Test Report: Docker_Linux_crio 17485

8dc642b39e51c59087e6696ac1afe8c1c527ee77:2023-10-24:31589

Tests failed (6/302)

|-------|-----------------------------------------------------|--------------|
| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 28    | TestAddons/parallel/Ingress                         | 157.85       |
| 159   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 185.22       |
| 209   | TestMultiNode/serial/PingHostFrom2Pods              | 3.65         |
| 230   | TestRunningBinaryUpgrade                            | 69.73        |
| 245   | TestStoppedBinaryUpgrade/Upgrade                    | 77.06        |
| 256   | TestPause/serial/SecondStartNoReconfiguration       | 62.32        |
|-------|-----------------------------------------------------|--------------|
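To reproduce any of these failures outside CI, the integration suite can be pointed at a single test. A minimal sketch, assuming a minikube source checkout with a built out/minikube-linux-amd64 binary; the -minikube-start-args flag follows the minikube integration harness, but flag names and defaults may differ between versions:

	# Re-run one failed test against the same driver/runtime combination.
	# -run takes an anchored regex; the subtest path can be quoted as-is.
	go test ./test/integration -v -timeout 90m \
	  -run 'TestAddons/parallel/Ingress' \
	  -args -minikube-start-args='--driver=docker --container-runtime=crio'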
TestAddons/parallel/Ingress (157.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-291433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-291433 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-291433 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f0159324-5a21-4c2a-b5ae-0149e1b3c22a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f0159324-5a21-4c2a-b5ae-0149e1b3c22a] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.095391273s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-291433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.170148512s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
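Exit status 28 here is curl's exit code for an operation timeout, propagated back through ssh: nginx never answered on port 80 inside the node during the 2m11s window. A hedged sketch for probing the same path by hand, reusing the profile and context names from this run:

	# Retry the in-node request with verbose output and a short timeout:
	out/minikube-linux-amd64 -p addons-291433 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Check the controller pod and whatever ingress objects it should serve:
	kubectl --context addons-291433 -n ingress-nginx get pods -o wide
	kubectl --context addons-291433 get ingress -A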
addons_test.go:285: (dbg) Run:  kubectl --context addons-291433 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-291433 addons disable ingress-dns --alsologtostderr -v=1: (1.49122219s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-291433 addons disable ingress --alsologtostderr -v=1: (7.767331911s)
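For contrast, the ingress-dns half of the check did pass before the addons were torn down: the test hostname resolved against the node IP. The equivalent manual verification, using the same two commands the test runs above:

	# Node IP for this profile (192.168.49.2 in this run):
	IP=$(out/minikube-linux-amd64 -p addons-291433 ip)
	# Ask the ingress-dns resolver on the node for the example record:
	nslookup hello-john.test "$IP"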
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-291433
helpers_test.go:235: (dbg) docker inspect addons-291433:

-- stdout --
	[
	    {
	        "Id": "afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6",
	        "Created": "2023-10-24T19:01:19.18689036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479888,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:01:19.534258215Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6/hostname",
	        "HostsPath": "/var/lib/docker/containers/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6/hosts",
	        "LogPath": "/var/lib/docker/containers/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6-json.log",
	        "Name": "/addons-291433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-291433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-291433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4987687a7093b3584d4b650381dc4c13635483d2a3f29dee1d2ff2dc02dde76d-init/diff:/var/lib/docker/overlay2/a59d6c70e56c008d6cc4bbed94412eb512943c9d608e3d99495b95d6ce6d39c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4987687a7093b3584d4b650381dc4c13635483d2a3f29dee1d2ff2dc02dde76d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4987687a7093b3584d4b650381dc4c13635483d2a3f29dee1d2ff2dc02dde76d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4987687a7093b3584d4b650381dc4c13635483d2a3f29dee1d2ff2dc02dde76d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-291433",
	                "Source": "/var/lib/docker/volumes/addons-291433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-291433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-291433",
	                "name.minikube.sigs.k8s.io": "addons-291433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e75d78632db949826bbee5c756f6c2ca4ec66e96b3232641f78113e14156e83",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7e75d78632db",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-291433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "afce13c26fb8",
	                        "addons-291433"
	                    ],
	                    "NetworkID": "6a201ddb9c61d2960172753f6aba6fa68615d218b8fe62c9a8939e24a9c8b6d8",
	                    "EndpointID": "500e05d6a28e0dadf9d0ea9bffa439bc89c215b05f86e5d33c52a6902372e0a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
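When only one field of this dump matters, docker inspect accepts a Go template via -f; the harness itself uses exactly this lookup later in these logs to find the host port mapped onto the node's SSH port:

	# Prints the host port bound to 22/tcp (33195 for this run):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-291433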
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-291433 -n addons-291433
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-291433 logs -n 25: (1.402406752s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-712524                                                                     | download-only-712524   | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| delete  | -p download-only-712524                                                                     | download-only-712524   | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| start   | --download-only -p                                                                          | download-docker-108940 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | download-docker-108940                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-108940                                                                   | download-docker-108940 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-298100   | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | binary-mirror-298100                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41549                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-298100                                                                     | binary-mirror-298100   | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-291433                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |                     |
	|         | addons-291433                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-291433 --wait=true                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC | 24 Oct 23 19:03 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-291433 addons                                                                        | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | addons-291433                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-291433 ssh cat                                                                       | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | /opt/local-path-provisioner/pvc-0e7eeee0-250a-4774-8e1e-98736e535d77_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | -p addons-291433                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-291433 addons disable                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:04 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-291433 ip                                                                            | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	| addons  | addons-291433 addons disable                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | addons-291433                                                                               |                        |         |         |                     |                     |
	| addons  | addons-291433 addons disable                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC | 24 Oct 23 19:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-291433 ssh curl -s                                                                   | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | -p addons-291433                                                                            |                        |         |         |                     |                     |
	| addons  | addons-291433 addons                                                                        | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-291433 addons                                                                        | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:04 UTC | 24 Oct 23 19:04 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-291433 ip                                                                            | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	| addons  | addons-291433 addons disable                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-291433 addons disable                                                                | addons-291433          | jenkins | v1.31.2 | 24 Oct 23 19:06 UTC | 24 Oct 23 19:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:52.980259  479219 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:52.980586  479219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:52.980605  479219 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:52.980613  479219 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:52.980897  479219 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:00:52.981812  479219 out.go:303] Setting JSON to false
	I1024 19:00:52.982870  479219 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9800,"bootTime":1698164253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:52.982945  479219 start.go:138] virtualization: kvm guest
	I1024 19:00:52.986057  479219 out.go:177] * [addons-291433] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:52.988440  479219 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:00:52.988441  479219 notify.go:220] Checking for updates...
	I1024 19:00:52.992471  479219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:52.994508  479219 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:00:52.996344  479219 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:00:52.998098  479219 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:00:53.000062  479219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:00:53.002318  479219 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:53.026019  479219 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:00:53.026157  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:53.084242  479219 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-24 19:00:53.07359236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:53.084377  479219 docker.go:295] overlay module found
	I1024 19:00:53.087120  479219 out.go:177] * Using the docker driver based on user configuration
	I1024 19:00:53.089454  479219 start.go:298] selected driver: docker
	I1024 19:00:53.089487  479219 start.go:902] validating driver "docker" against <nil>
	I1024 19:00:53.089520  479219 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:00:53.090432  479219 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:53.159643  479219 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-10-24 19:00:53.15046719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:53.159813  479219 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:53.160001  479219 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:00:53.162770  479219 out.go:177] * Using Docker driver with root privileges
	I1024 19:00:53.164634  479219 cni.go:84] Creating CNI manager for ""
	I1024 19:00:53.164664  479219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:00:53.164677  479219 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:00:53.164695  479219 start_flags.go:323] config:
	{Name:addons-291433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-291433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:53.166991  479219 out.go:177] * Starting control plane node addons-291433 in cluster addons-291433
	I1024 19:00:53.169004  479219 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:00:53.170891  479219 out.go:177] * Pulling base image ...
	I1024 19:00:53.172820  479219 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:53.172876  479219 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:53.172894  479219 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:53.172898  479219 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:00:53.173053  479219 preload.go:174] Found /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:00:53.173070  479219 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:00:53.173563  479219 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/config.json ...
	I1024 19:00:53.173593  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/config.json: {Name:mk9ff645361912b97d379f42cc33595fd14776b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:00:53.191250  479219 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:00:53.191456  479219 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:00:53.191482  479219 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1024 19:00:53.191488  479219 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1024 19:00:53.191499  479219 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:00:53.191508  479219 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1024 19:01:05.901334  479219 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1024 19:01:05.901372  479219 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:01:05.901418  479219 start.go:365] acquiring machines lock for addons-291433: {Name:mkdc0a9d607687d55c33ef9e6ed48e56f5a9bd55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:01:05.901563  479219 start.go:369] acquired machines lock for "addons-291433" in 116.411µs
	I1024 19:01:05.901590  479219 start.go:93] Provisioning new machine with config: &{Name:addons-291433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-291433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:01:05.901699  479219 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:01:05.904251  479219 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1024 19:01:05.904584  479219 start.go:159] libmachine.API.Create for "addons-291433" (driver="docker")
	I1024 19:01:05.904623  479219 client.go:168] LocalClient.Create starting
	I1024 19:01:05.904757  479219 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem
	I1024 19:01:05.988274  479219 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem
	I1024 19:01:06.091272  479219 cli_runner.go:164] Run: docker network inspect addons-291433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:01:06.111752  479219 cli_runner.go:211] docker network inspect addons-291433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:01:06.111881  479219 network_create.go:281] running [docker network inspect addons-291433] to gather additional debugging logs...
	I1024 19:01:06.111917  479219 cli_runner.go:164] Run: docker network inspect addons-291433
	W1024 19:01:06.131133  479219 cli_runner.go:211] docker network inspect addons-291433 returned with exit code 1
	I1024 19:01:06.131194  479219 network_create.go:284] error running [docker network inspect addons-291433]: docker network inspect addons-291433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-291433 not found
	I1024 19:01:06.131215  479219 network_create.go:286] output of [docker network inspect addons-291433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-291433 not found
	
	** /stderr **
	I1024 19:01:06.131471  479219 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:01:06.150616  479219 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002829290}
	I1024 19:01:06.150656  479219 network_create.go:124] attempt to create docker network addons-291433 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:01:06.150757  479219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-291433 addons-291433
	I1024 19:01:06.214415  479219 network_create.go:108] docker network addons-291433 192.168.49.0/24 created
	I1024 19:01:06.214446  479219 kic.go:118] calculated static IP "192.168.49.2" for the "addons-291433" container
	I1024 19:01:06.214523  479219 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:01:06.233357  479219 cli_runner.go:164] Run: docker volume create addons-291433 --label name.minikube.sigs.k8s.io=addons-291433 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:01:06.255384  479219 oci.go:103] Successfully created a docker volume addons-291433
	I1024 19:01:06.255486  479219 cli_runner.go:164] Run: docker run --rm --name addons-291433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291433 --entrypoint /usr/bin/test -v addons-291433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:01:13.501044  479219 cli_runner.go:217] Completed: docker run --rm --name addons-291433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291433 --entrypoint /usr/bin/test -v addons-291433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (7.245506024s)
	I1024 19:01:13.501074  479219 oci.go:107] Successfully prepared a docker volume addons-291433
	I1024 19:01:13.501109  479219 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:01:13.501132  479219 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:01:13.501194  479219 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-291433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:01:19.110763  479219 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-291433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.609518421s)
	I1024 19:01:19.110809  479219 kic.go:200] duration metric: took 5.609669 seconds to extract preloaded images to volume
	W1024 19:01:19.110983  479219 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:01:19.111105  479219 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:01:19.166138  479219 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-291433 --name addons-291433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-291433 --network addons-291433 --ip 192.168.49.2 --volume addons-291433:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:01:19.546926  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Running}}
	I1024 19:01:19.570397  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:19.593609  479219 cli_runner.go:164] Run: docker exec addons-291433 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:01:19.683185  479219 oci.go:144] the created container "addons-291433" has a running status.
	I1024 19:01:19.683224  479219 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa...
	I1024 19:01:19.769030  479219 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:01:19.796531  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:19.818773  479219 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:01:19.818803  479219 kic_runner.go:114] Args: [docker exec --privileged addons-291433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:01:19.890464  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:19.913783  479219 machine.go:88] provisioning docker machine ...
	I1024 19:01:19.913831  479219 ubuntu.go:169] provisioning hostname "addons-291433"
	I1024 19:01:19.913901  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:19.941806  479219 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:19.942288  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1024 19:01:19.942318  479219 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-291433 && echo "addons-291433" | sudo tee /etc/hostname
	I1024 19:01:19.943172  479219 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36546->127.0.0.1:33195: read: connection reset by peer
	I1024 19:01:23.088493  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-291433
	
	I1024 19:01:23.088605  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:23.106815  479219 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:23.107219  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1024 19:01:23.107241  479219 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-291433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-291433/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-291433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:01:23.230914  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:01:23.230951  479219 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:01:23.230973  479219 ubuntu.go:177] setting up certificates
	I1024 19:01:23.230984  479219 provision.go:83] configureAuth start
	I1024 19:01:23.231067  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291433
	I1024 19:01:23.252666  479219 provision.go:138] copyHostCerts
	I1024 19:01:23.252753  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:01:23.252931  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:01:23.253017  479219 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:01:23.253073  479219 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.addons-291433 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-291433]
	I1024 19:01:23.470409  479219 provision.go:172] copyRemoteCerts
	I1024 19:01:23.470486  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:01:23.470527  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:23.490324  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:23.586702  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:01:23.612015  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 19:01:23.637778  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:01:23.661161  479219 provision.go:86] duration metric: configureAuth took 430.158599ms
	I1024 19:01:23.661194  479219 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:01:23.661388  479219 config.go:182] Loaded profile config "addons-291433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:23.661490  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:23.678709  479219 main.go:141] libmachine: Using SSH client type: native
	I1024 19:01:23.679052  479219 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33195 <nil> <nil>}
	I1024 19:01:23.679069  479219 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:01:23.914335  479219 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:01:23.914369  479219 machine.go:91] provisioned docker machine in 4.000557596s
	I1024 19:01:23.914382  479219 client.go:171] LocalClient.Create took 18.009743631s
	I1024 19:01:23.914410  479219 start.go:167] duration metric: libmachine.API.Create for "addons-291433" took 18.009827432s
	I1024 19:01:23.914423  479219 start.go:300] post-start starting for "addons-291433" (driver="docker")
	I1024 19:01:23.914437  479219 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:01:23.914519  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:01:23.914563  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:23.933204  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:24.027191  479219 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:01:24.030950  479219 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:01:24.030993  479219 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:01:24.031012  479219 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:01:24.031030  479219 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:01:24.031045  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:01:24.031125  479219 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:01:24.031166  479219 start.go:303] post-start completed in 116.727384ms
	I1024 19:01:24.031506  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291433
	I1024 19:01:24.051003  479219 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/config.json ...
	I1024 19:01:24.051512  479219 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:01:24.051595  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:24.073200  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:24.163147  479219 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:01:24.167890  479219 start.go:128] duration metric: createHost completed in 18.266172371s
	I1024 19:01:24.167945  479219 start.go:83] releasing machines lock for "addons-291433", held for 18.266346112s
	I1024 19:01:24.168036  479219 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291433
	I1024 19:01:24.188999  479219 ssh_runner.go:195] Run: cat /version.json
	I1024 19:01:24.189088  479219 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:01:24.189110  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:24.189145  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:24.207848  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:24.209004  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:24.387881  479219 ssh_runner.go:195] Run: systemctl --version
	I1024 19:01:24.392740  479219 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:01:24.531908  479219 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:01:24.536208  479219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:01:24.554836  479219 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:01:24.554921  479219 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:01:24.587648  479219 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1024 19:01:24.587677  479219 start.go:472] detecting cgroup driver to use...
	I1024 19:01:24.587724  479219 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:01:24.587781  479219 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:01:24.606303  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:01:24.619390  479219 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:01:24.619467  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:01:24.635892  479219 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:01:24.652937  479219 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:01:24.741655  479219 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:01:24.829111  479219 docker.go:214] disabling docker service ...
	I1024 19:01:24.829179  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:01:24.851071  479219 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:01:24.866226  479219 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:01:24.955205  479219 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:01:25.047956  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:01:25.059052  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:01:25.074244  479219 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:01:25.074300  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:25.083829  479219 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:01:25.083927  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:25.093599  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:25.103170  479219 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:01:25.112860  479219 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:01:25.123091  479219 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:01:25.132919  479219 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:01:25.143537  479219 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:01:25.232732  479219 ssh_runner.go:195] Run: sudo systemctl restart crio
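The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, cgroup manager and conmon cgroup that the rest of the run relies on. A quick way to confirm all three after the restart (a sketch; the drop-in path comes straight from the commands above):

	# show the three values minikube just rewrote
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"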
	I1024 19:01:25.351778  479219 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:01:25.351880  479219 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:01:25.355908  479219 start.go:540] Will wait 60s for crictl version
	I1024 19:01:25.355964  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:01:25.359479  479219 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:01:25.396096  479219 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:01:25.396221  479219 ssh_runner.go:195] Run: crio --version
	I1024 19:01:25.440118  479219 ssh_runner.go:195] Run: crio --version
	I1024 19:01:25.489801  479219 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:01:25.491902  479219 cli_runner.go:164] Run: docker network inspect addons-291433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:01:25.512938  479219 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:01:25.517030  479219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:25.527519  479219 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:01:25.527578  479219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:25.592062  479219 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:01:25.592087  479219 crio.go:415] Images already preloaded, skipping extraction
	I1024 19:01:25.592134  479219 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:01:25.626601  479219 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:01:25.626623  479219 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:01:25.626687  479219 ssh_runner.go:195] Run: crio config
	I1024 19:01:25.670957  479219 cni.go:84] Creating CNI manager for ""
	I1024 19:01:25.670981  479219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:01:25.670999  479219 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:01:25.671024  479219 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-291433 NodeName:addons-291433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:01:25.671173  479219 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-291433"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
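	Before this rendered config is handed to kubeadm init (below), it can be sanity-checked offline once it is written to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down). Recent kubeadm releases provide a validate subcommand for exactly this; a sketch, assuming the binary path used throughout this run:

	# validate the generated kubeadm.yaml without touching the cluster
	sudo /var/lib/minikube/binaries/v1.28.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml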
	
	I1024 19:01:25.671245  479219 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-291433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-291433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
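	The unit drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). On the node, systemd can print the merged unit, which is handy when a flag in that ExecStart line looks wrong:

	# print kubelet.service plus every drop-in, including 10-kubeadm.conf
	systemctl cat kubelet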
	I1024 19:01:25.671298  479219 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:01:25.680503  479219 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:01:25.680585  479219 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:01:25.689354  479219 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1024 19:01:25.710073  479219 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:01:25.731059  479219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1024 19:01:25.753852  479219 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:01:25.759608  479219 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:01:25.773993  479219 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433 for IP: 192.168.49.2
	I1024 19:01:25.774175  479219 certs.go:190] acquiring lock for shared ca certs: {Name:mkd071e4924662af2a94ad3f2018330ff8506826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:25.774390  479219 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key
	I1024 19:01:25.894208  479219 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt ...
	I1024 19:01:25.894243  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt: {Name:mkceba7fb6c8c5f8402811f54ff2c5150057af72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:25.894422  479219 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key ...
	I1024 19:01:25.894433  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key: {Name:mk82d2d54da43b8db7f8912c639aeaf85dc0c8c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:25.894506  479219 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key
	I1024 19:01:25.981464  479219 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt ...
	I1024 19:01:25.981519  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt: {Name:mk0ed04ededa36a50a5a8c2eafb8552d68277ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:25.981724  479219 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key ...
	I1024 19:01:25.981744  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key: {Name:mk9595fba94847c3ddd922ccc22ccf34ab193fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:25.981906  479219 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.key
	I1024 19:01:25.981952  479219 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt with IP's: []
	I1024 19:01:26.058063  479219 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt ...
	I1024 19:01:26.058100  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: {Name:mk5c8210b82b81943a11aca1f9c2d3f492cf3610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.058272  479219 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.key ...
	I1024 19:01:26.058279  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.key: {Name:mk331730c1c477a2ae6425981643a35a7ae6edb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.058382  479219 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key.dd3b5fb2
	I1024 19:01:26.058406  479219 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:01:26.154087  479219 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt.dd3b5fb2 ...
	I1024 19:01:26.154129  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt.dd3b5fb2: {Name:mkbbfe0f9ec0670cb991fbbf66a7fdb7ed31a529 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.154300  479219 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key.dd3b5fb2 ...
	I1024 19:01:26.154312  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key.dd3b5fb2: {Name:mkd2016a2c668bee4c93848d2112b3e38cf5bf9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.154396  479219 certs.go:337] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt
	I1024 19:01:26.154485  479219 certs.go:341] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key
	I1024 19:01:26.154535  479219 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.key
	I1024 19:01:26.154561  479219 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.crt with IP's: []
	I1024 19:01:26.425159  479219 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.crt ...
	I1024 19:01:26.425206  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.crt: {Name:mk02f324ac6bc65150b0d6491b1aefbb82d86472 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.425428  479219 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.key ...
	I1024 19:01:26.425447  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.key: {Name:mk12e6f763e870308cb70d6182e62af523f5ed89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:26.425639  479219 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:01:26.425680  479219 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:01:26.425705  479219 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:01:26.425732  479219 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem (1675 bytes)
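	Each generated certificate can be checked against the SAN list requested above (192.168.49.2, 10.96.0.1, 127.0.0.1 and 10.0.0.1 for the apiserver cert). A sketch using openssl on the host copy of the cert:

	# list the Subject Alternative Names baked into the apiserver cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'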
	I1024 19:01:26.426430  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:01:26.451384  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:01:26.475147  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:01:26.499313  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:01:26.523670  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:01:26.550730  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:01:26.576717  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:01:26.604664  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:01:26.631706  479219 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:01:26.655794  479219 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:01:26.673884  479219 ssh_runner.go:195] Run: openssl version
	I1024 19:01:26.680172  479219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:01:26.690862  479219 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:26.694941  479219 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:26.695017  479219 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:01:26.702598  479219 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:01:26.715217  479219 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:01:26.718899  479219 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:01:26.718957  479219 kubeadm.go:404] StartCluster: {Name:addons-291433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-291433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:01:26.719070  479219 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:01:26.719139  479219 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:01:26.754164  479219 cri.go:89] found id: ""
	I1024 19:01:26.754232  479219 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:01:26.762763  479219 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:01:26.771058  479219 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:01:26.771108  479219 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:01:26.779405  479219 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:01:26.779458  479219 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:01:26.831796  479219 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:01:26.831888  479219 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:01:26.876675  479219 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:01:26.876804  479219 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1024 19:01:26.876895  479219 kubeadm.go:322] OS: Linux
	I1024 19:01:26.876975  479219 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:01:26.877057  479219 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:01:26.877137  479219 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:01:26.877226  479219 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:01:26.877339  479219 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:01:26.877469  479219 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:01:26.877632  479219 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1024 19:01:26.877757  479219 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1024 19:01:26.877831  479219 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1024 19:01:26.954589  479219 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:01:26.954716  479219 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:01:26.954815  479219 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:01:27.178640  479219 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:01:27.182298  479219 out.go:204]   - Generating certificates and keys ...
	I1024 19:01:27.182512  479219 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:01:27.182599  479219 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:01:27.265816  479219 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:01:27.348166  479219 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:01:27.464903  479219 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:01:27.640381  479219 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:01:27.787370  479219 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:01:27.787607  479219 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-291433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:01:27.966086  479219 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:01:27.966228  479219 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-291433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:01:28.076121  479219 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:01:28.163278  479219 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:01:28.218940  479219 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:01:28.219069  479219 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:01:28.441602  479219 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:01:28.605062  479219 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:01:28.709806  479219 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:01:28.866709  479219 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:01:28.867855  479219 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:01:28.871598  479219 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:01:28.874873  479219 out.go:204]   - Booting up control plane ...
	I1024 19:01:28.875097  479219 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:01:28.875261  479219 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:01:28.875407  479219 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:01:28.885376  479219 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:01:28.886541  479219 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:01:28.886601  479219 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:01:28.966453  479219 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:01:34.968519  479219 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002111 seconds
	I1024 19:01:34.968675  479219 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:01:34.982484  479219 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:01:35.505160  479219 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:01:35.505395  479219 kubeadm.go:322] [mark-control-plane] Marking the node addons-291433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:01:36.017040  479219 kubeadm.go:322] [bootstrap-token] Using token: z25zhq.fbsu3r4ldo7o123t
	I1024 19:01:36.019573  479219 out.go:204]   - Configuring RBAC rules ...
	I1024 19:01:36.019820  479219 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:01:36.027017  479219 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:01:36.038173  479219 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:01:36.042449  479219 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:01:36.046697  479219 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:01:36.050486  479219 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:01:36.065452  479219 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:01:36.299497  479219 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:01:36.452213  479219 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:01:36.454330  479219 kubeadm.go:322] 
	I1024 19:01:36.454545  479219 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:01:36.454562  479219 kubeadm.go:322] 
	I1024 19:01:36.454671  479219 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:01:36.454686  479219 kubeadm.go:322] 
	I1024 19:01:36.454722  479219 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:01:36.454801  479219 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:01:36.454871  479219 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:01:36.454882  479219 kubeadm.go:322] 
	I1024 19:01:36.454956  479219 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:01:36.454965  479219 kubeadm.go:322] 
	I1024 19:01:36.455031  479219 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:01:36.455055  479219 kubeadm.go:322] 
	I1024 19:01:36.455119  479219 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:01:36.455215  479219 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:01:36.455318  479219 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:01:36.455334  479219 kubeadm.go:322] 
	I1024 19:01:36.455438  479219 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:01:36.455555  479219 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:01:36.455570  479219 kubeadm.go:322] 
	I1024 19:01:36.455685  479219 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z25zhq.fbsu3r4ldo7o123t \
	I1024 19:01:36.455809  479219 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 \
	I1024 19:01:36.455848  479219 kubeadm.go:322] 	--control-plane 
	I1024 19:01:36.455862  479219 kubeadm.go:322] 
	I1024 19:01:36.455974  479219 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:01:36.455985  479219 kubeadm.go:322] 
	I1024 19:01:36.456081  479219 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z25zhq.fbsu3r4ldo7o123t \
	I1024 19:01:36.456232  479219 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 
	I1024 19:01:36.461230  479219 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1024 19:01:36.461505  479219 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
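	The bootstrap token embedded in the join commands above (z25zhq.fbsu3r4ldo7o123t) has a 24h TTL per the InitConfiguration earlier in this log. If it has expired by the time another node tries to join, a fresh command can be printed from the control plane; a sketch, assuming shell access inside the node container:

	# mint a new token and print the matching 'kubeadm join ...' line
	sudo kubeadm token create --print-join-command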
	I1024 19:01:36.461677  479219 cni.go:84] Creating CNI manager for ""
	I1024 19:01:36.461742  479219 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:01:36.466517  479219 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:01:36.468629  479219 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:01:36.474460  479219 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:01:36.474489  479219 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:01:36.569348  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
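	Once the CNI manifest is applied, the kindnet daemonset should roll out in kube-system before pods get networking. A sketch (the daemonset name "kindnet" is an assumption based on the kindnet recommendation logged above):

	# wait for the CNI daemonset to become ready on all nodes
	kubectl -n kube-system rollout status daemonset kindnet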
	I1024 19:01:37.398547  479219 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:01:37.398605  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:37.398651  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=addons-291433 minikube.k8s.io/updated_at=2023_10_24T19_01_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:37.407446  479219 ops.go:34] apiserver oom_adj: -16
	I1024 19:01:37.544441  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:37.653533  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:38.228384  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:38.728326  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:39.227948  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:39.728482  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:40.228427  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:40.728387  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.228720  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:41.728661  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.228307  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:42.728818  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.228170  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:43.728535  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.227871  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:44.728332  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.227921  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:45.728206  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.228009  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:46.727849  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.228344  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:47.728016  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.228918  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:48.728197  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.228236  479219 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:01:49.309961  479219 kubeadm.go:1081] duration metric: took 11.911408818s to wait for elevateKubeSystemPrivileges.
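	The repeated "get sa default" calls above are minikube polling until kubeadm's post-init controllers have created the default ServiceAccount. The loop is equivalent to roughly the following (a sketch, not minikube's actual code):

	# poll until the default ServiceAccount exists in the default namespace
	until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done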
	I1024 19:01:49.310001  479219 kubeadm.go:406] StartCluster complete in 22.591050188s
	I1024 19:01:49.310022  479219 settings.go:142] acquiring lock: {Name:mk9f191a52d3ce53608a65d0f0798312edc39465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:49.310152  479219 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:01:49.310732  479219 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/kubeconfig: {Name:mkcf54ea0dedcb61df1368dce9070a6aebbbad94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:01:49.310952  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:01:49.311125  479219 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1024 19:01:49.311232  479219 config.go:182] Loaded profile config "addons-291433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:49.311244  479219 addons.go:69] Setting gcp-auth=true in profile "addons-291433"
	I1024 19:01:49.311245  479219 addons.go:69] Setting volumesnapshots=true in profile "addons-291433"
	I1024 19:01:49.311268  479219 addons.go:231] Setting addon volumesnapshots=true in "addons-291433"
	I1024 19:01:49.311266  479219 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-291433"
	I1024 19:01:49.311290  479219 addons.go:69] Setting ingress=true in profile "addons-291433"
	I1024 19:01:49.311298  479219 addons.go:69] Setting registry=true in profile "addons-291433"
	I1024 19:01:49.311315  479219 addons.go:69] Setting ingress-dns=true in profile "addons-291433"
	I1024 19:01:49.311321  479219 addons.go:231] Setting addon ingress=true in "addons-291433"
	I1024 19:01:49.311327  479219 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-291433"
	I1024 19:01:49.311339  479219 addons.go:69] Setting inspektor-gadget=true in profile "addons-291433"
	I1024 19:01:49.311343  479219 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-291433"
	I1024 19:01:49.311348  479219 addons.go:231] Setting addon inspektor-gadget=true in "addons-291433"
	I1024 19:01:49.311352  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311379  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311290  479219 addons.go:69] Setting metrics-server=true in profile "addons-291433"
	I1024 19:01:49.311396  479219 addons.go:69] Setting storage-provisioner=true in profile "addons-291433"
	I1024 19:01:49.311394  479219 addons.go:69] Setting cloud-spanner=true in profile "addons-291433"
	I1024 19:01:49.311402  479219 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-291433"
	I1024 19:01:49.311380  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311418  479219 addons.go:231] Setting addon cloud-spanner=true in "addons-291433"
	I1024 19:01:49.311407  479219 addons.go:231] Setting addon storage-provisioner=true in "addons-291433"
	I1024 19:01:49.311473  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311494  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311422  479219 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-291433"
	I1024 19:01:49.311603  479219 mustload.go:65] Loading cluster: addons-291433
	I1024 19:01:49.311869  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311912  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311912  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311925  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311944  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311975  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.312221  479219 config.go:182] Loaded profile config "addons-291433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:01:49.311383  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311317  479219 addons.go:231] Setting addon registry=true in "addons-291433"
	I1024 19:01:49.312467  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.312480  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.312816  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.312900  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311330  479219 addons.go:231] Setting addon ingress-dns=true in "addons-291433"
	I1024 19:01:49.314684  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.315160  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311283  479219 addons.go:69] Setting default-storageclass=true in profile "addons-291433"
	I1024 19:01:49.315453  479219 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-291433"
	I1024 19:01:49.315775  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311405  479219 addons.go:231] Setting addon metrics-server=true in "addons-291433"
	I1024 19:01:49.317295  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.317750  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.311390  479219 addons.go:69] Setting helm-tiller=true in profile "addons-291433"
	I1024 19:01:49.323205  479219 addons.go:231] Setting addon helm-tiller=true in "addons-291433"
	I1024 19:01:49.323344  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.311385  479219 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-291433"
	I1024 19:01:49.324514  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.325056  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.324106  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.352761  479219 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1024 19:01:49.355562  479219 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1024 19:01:49.355568  479219 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:49.362447  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1024 19:01:49.362526  479219 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:49.362528  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.362536  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1024 19:01:49.362572  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.367117  479219 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:01:49.369788  479219 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:49.369750  479219 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:49.372048  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:01:49.373990  479219 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:49.372150  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.377858  479219 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1024 19:01:49.380249  479219 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:49.380290  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1024 19:01:49.380364  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.385981  479219 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1024 19:01:49.382151  479219 addons.go:231] Setting addon default-storageclass=true in "addons-291433"
	I1024 19:01:49.387884  479219 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-291433" context rescaled to 1 replicas
	I1024 19:01:49.389874  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.392048  479219 out.go:177]   - Using image docker.io/registry:2.8.3
	I1024 19:01:49.392499  479219 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:49.392555  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.393402  479219 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1024 19:01:49.393457  479219 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:01:49.393841  479219 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-291433"
	I1024 19:01:49.396200  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:49.397109  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1024 19:01:49.397119  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.397137  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1024 19:01:49.397463  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:49.398786  479219 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1024 19:01:49.398820  479219 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1024 19:01:49.398982  479219 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1024 19:01:49.399148  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.402620  479219 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1024 19:01:49.404490  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1024 19:01:49.413030  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1024 19:01:49.413069  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1024 19:01:49.413153  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.404436  479219 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1024 19:01:49.413509  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1024 19:01:49.413584  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.421470  479219 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1024 19:01:49.421505  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1024 19:01:49.421617  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.402945  479219 out.go:177] * Verifying Kubernetes components...
	I1024 19:01:49.403203  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1024 19:01:49.425508  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.426333  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:01:49.426401  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.428389  479219 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:01:49.428415  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:01:49.428500  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.428695  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1024 19:01:49.432681  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1024 19:01:49.435128  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1024 19:01:49.437141  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1024 19:01:49.436651  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.439471  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1024 19:01:49.442880  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1024 19:01:49.442530  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.445609  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.450239  479219 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1024 19:01:49.459600  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1024 19:01:49.459635  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1024 19:01:49.459729  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.454080  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.479124  479219 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1024 19:01:49.484613  479219 out.go:177]   - Using image docker.io/busybox:stable
	I1024 19:01:49.486559  479219 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:49.486582  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1024 19:01:49.486640  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.489244  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.494781  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.495026  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.502106  479219 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:49.502137  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:01:49.502205  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:49.507768  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.508328  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.512385  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.514770  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.522886  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:49.755548  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:01:49.756920  479219 node_ready.go:35] waiting up to 6m0s for node "addons-291433" to be "Ready" ...
	I1024 19:01:49.945205  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:01:49.947765  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1024 19:01:49.949664  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:01:49.959345  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:01:50.057555  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:01:50.142294  479219 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1024 19:01:50.142344  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1024 19:01:50.142887  479219 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1024 19:01:50.142937  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1024 19:01:50.157275  479219 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1024 19:01:50.157310  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1024 19:01:50.163836  479219 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:01:50.163868  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1024 19:01:50.245759  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:01:50.249636  479219 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1024 19:01:50.249665  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1024 19:01:50.446183  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:01:50.457794  479219 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1024 19:01:50.457892  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1024 19:01:50.458739  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1024 19:01:50.458790  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1024 19:01:50.550817  479219 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:50.550947  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1024 19:01:50.552644  479219 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1024 19:01:50.552738  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1024 19:01:50.559200  479219 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:50.559299  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1024 19:01:50.561100  479219 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:01:50.561135  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:01:51.052745  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:01:51.143816  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1024 19:01:51.143956  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1024 19:01:51.153680  479219 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1024 19:01:51.153722  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1024 19:01:51.156320  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1024 19:01:51.163130  479219 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:51.163252  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:01:51.452955  479219 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1024 19:01:51.452986  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1024 19:01:51.546072  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1024 19:01:51.546116  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1024 19:01:51.742044  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:01:51.745520  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1024 19:01:51.745656  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1024 19:01:51.864247  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:01:52.057698  479219 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1024 19:01:52.057731  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1024 19:01:52.343237  479219 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:52.343270  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1024 19:01:52.354685  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1024 19:01:52.354724  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1024 19:01:52.744649  479219 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1024 19:01:52.744684  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1024 19:01:52.760266  479219 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1024 19:01:52.760373  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1024 19:01:52.842496  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:53.141858  479219 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.38619798s)
	I1024 19:01:53.141960  479219 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1024 19:01:53.242185  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.296928007s)
	I1024 19:01:53.359012  479219 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1024 19:01:53.359048  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1024 19:01:53.457816  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1024 19:01:53.457846  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1024 19:01:53.648966  479219 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:53.649002  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1024 19:01:53.763318  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1024 19:01:53.763347  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1024 19:01:53.855201  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:01:53.968610  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:01:54.057460  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1024 19:01:54.057557  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1024 19:01:54.443136  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1024 19:01:54.443192  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1024 19:01:54.645704  479219 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:54.645794  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1024 19:01:54.743211  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.795381017s)
	I1024 19:01:54.856831  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:01:56.249741  479219 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1024 19:01:56.249878  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:56.273349  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:56.466384  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:01:56.642269  479219 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1024 19:01:56.674422  479219 addons.go:231] Setting addon gcp-auth=true in "addons-291433"
	I1024 19:01:56.674478  479219 host.go:66] Checking if "addons-291433" exists ...
	I1024 19:01:56.674857  479219 cli_runner.go:164] Run: docker container inspect addons-291433 --format={{.State.Status}}
	I1024 19:01:56.694131  479219 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1024 19:01:56.694194  479219 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291433
	I1024 19:01:56.712537  479219 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33195 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/addons-291433/id_rsa Username:docker}
	I1024 19:01:57.368771  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.418998146s)
	I1024 19:01:57.368836  479219 addons.go:467] Verifying addon ingress=true in "addons-291433"
	I1024 19:01:57.370937  479219 out.go:177] * Verifying ingress addon...
	I1024 19:01:57.369015  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.409565243s)
	I1024 19:01:57.369085  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.311488272s)
	I1024 19:01:57.369154  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.123354323s)
	I1024 19:01:57.369265  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.923052306s)
	I1024 19:01:57.369310  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.316462201s)
	I1024 19:01:57.369356  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.213001571s)
	I1024 19:01:57.369426  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.627273439s)
	I1024 19:01:57.369553  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.526970699s)
	I1024 19:01:57.369634  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.514327181s)
	I1024 19:01:57.371001  479219 addons.go:467] Verifying addon registry=true in "addons-291433"
	I1024 19:01:57.371062  479219 addons.go:467] Verifying addon metrics-server=true in "addons-291433"
	W1024 19:01:57.371127  479219 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:01:57.373930  479219 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:01:57.374464  479219 retry.go:31] will retry after 154.89087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:01:57.374428  479219 out.go:177] * Verifying registry addon...
	I1024 19:01:57.376955  479219 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1024 19:01:57.443402  479219 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1024 19:01:57.448231  479219 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:01:57.448259  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:57.448233  479219 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:01:57.448339  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:57.453186  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:57.453306  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:57.529569  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:01:57.957956  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:57.958339  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:58.460053  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:58.460227  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:58.749524  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.892558723s)
	I1024 19:01:58.749575  479219 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-291433"
	I1024 19:01:58.749534  479219 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.055367923s)
	I1024 19:01:58.751766  479219 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:01:58.753452  479219 out.go:177] * Verifying csi-hostpath-driver addon...
	I1024 19:01:58.755401  479219 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1024 19:01:58.757401  479219 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1024 19:01:58.757427  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1024 19:01:58.755947  479219 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1024 19:01:58.762098  479219 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:01:58.762134  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:01:58.767242  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:01:58.778344  479219 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1024 19:01:58.778375  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1024 19:01:58.852355  479219 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:01:58.852469  479219 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1024 19:01:58.874096  479219 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:01:58.958418  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:58.958824  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:58.959267  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:01:58.994664  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.465032545s)
	I1024 19:01:59.346859  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:01:59.460085  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:59.461842  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:01:59.846213  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:01:59.964545  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:01:59.964994  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:00.344195  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:00.460162  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:00.462725  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:00.761455  479219 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.887301994s)
	I1024 19:02:00.762917  479219 addons.go:467] Verifying addon gcp-auth=true in "addons-291433"
	I1024 19:02:00.765933  479219 out.go:177] * Verifying gcp-auth addon...
	I1024 19:02:00.768985  479219 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1024 19:02:00.848221  479219 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1024 19:02:00.848364  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:00.849423  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:00.853043  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:00.958523  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:00.959119  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:01.275240  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:01.358284  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:01.459233  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:01.459414  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:01.467644  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:01.773978  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:01.857541  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:01.960345  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:01.961351  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:02.273625  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:02.358127  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:02.458435  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:02.458756  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:02.773138  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:02.858491  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:02.959330  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:02.959771  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.272582  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.357211  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:03.458518  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.458799  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.773241  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:03.857426  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:03.958162  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:03.958439  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:03.959172  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:04.273639  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.356889  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:04.458854  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:04.459148  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:04.774212  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:04.857311  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:04.958338  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:04.959587  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:05.273465  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.357376  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.460579  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:05.460718  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:05.773536  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:05.857567  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:05.956904  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:05.957350  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.273596  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.356599  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.457753  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.458017  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:06.458544  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:06.771514  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:06.857963  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:06.957596  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:06.957822  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.274035  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.356730  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.458110  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:07.458289  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.772913  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:07.856873  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:07.957355  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:07.957726  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.271980  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.356901  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.458369  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.458733  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:08.459264  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:08.773497  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:08.856541  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:08.958175  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:08.958429  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.272487  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.357111  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.458035  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.458294  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:09.772693  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:09.856832  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:09.957623  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:09.957898  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.272585  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.356408  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.458673  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.458988  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.772848  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:10.857040  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:10.957706  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:10.957738  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:10.958613  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:11.272648  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.357294  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.458094  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.458289  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:11.772191  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:11.857196  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:11.957899  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:11.958150  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.272688  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.356882  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.458942  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.459298  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.772735  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:12.857438  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:12.957732  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:12.957988  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:12.959071  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:13.271380  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.357514  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.457317  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:13.457395  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.772688  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:13.857476  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:13.957317  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:13.957609  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.272544  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.357062  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.458567  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.459102  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:14.775699  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:14.856766  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:14.957380  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:14.957788  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.272316  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.357700  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.457437  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.457675  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:15.458777  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:15.773549  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:15.856891  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:15.957939  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:15.958330  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.273171  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.357000  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.458418  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.458661  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:16.771675  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:16.856476  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:16.956948  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:16.957159  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.274316  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.357538  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.457346  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.457618  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.773745  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:17.857051  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:17.957967  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:17.958226  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:17.958903  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:18.273409  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.357952  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.458399  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:18.458622  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.772563  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:18.857576  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:18.957484  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:18.957509  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.274025  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.357424  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.458358  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.458552  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:19.774014  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:19.857140  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:19.958071  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:19.958341  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.272002  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.357645  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.457989  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.458412  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:20.458596  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:20.773182  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:20.856977  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:20.958290  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:20.958494  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.272136  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.357158  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.458666  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.459055  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:21.773880  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:21.856511  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:21.957148  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:21.957554  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.272601  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.357430  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.458241  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.458566  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:22.458886  479219 node_ready.go:58] node "addons-291433" has status "Ready":"False"
	I1024 19:02:22.773645  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:22.856558  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:22.958068  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:22.958301  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.271328  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.356754  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.460713  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:23.461436  479219 node_ready.go:49] node "addons-291433" has status "Ready":"True"
	I1024 19:02:23.461468  479219 node_ready.go:38] duration metric: took 33.704457715s waiting for node "addons-291433" to be "Ready" ...
	I1024 19:02:23.461483  479219 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:02:23.461872  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.554109  479219 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2k476" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:23.861750  479219 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:02:23.862308  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:23.862365  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:23.958698  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:23.959072  479219 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:02:23.959093  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.280070  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.358612  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.461066  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.462100  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:24.774657  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:24.857422  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:24.959676  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:24.960474  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.274080  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.358250  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.458346  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.458577  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:25.576088  479219 pod_ready.go:102] pod "coredns-5dd5756b68-2k476" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:25.775134  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:25.861172  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:25.959617  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:25.959658  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.079281  479219 pod_ready.go:92] pod "coredns-5dd5756b68-2k476" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.079321  479219 pod_ready.go:81] duration metric: took 2.525171892s waiting for pod "coredns-5dd5756b68-2k476" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.079350  479219 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.086105  479219 pod_ready.go:92] pod "etcd-addons-291433" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.086140  479219 pod_ready.go:81] duration metric: took 6.776847ms waiting for pod "etcd-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.086160  479219 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.092473  479219 pod_ready.go:92] pod "kube-apiserver-addons-291433" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.092520  479219 pod_ready.go:81] duration metric: took 6.35092ms waiting for pod "kube-apiserver-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.092532  479219 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.100271  479219 pod_ready.go:92] pod "kube-controller-manager-addons-291433" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.100304  479219 pod_ready.go:81] duration metric: took 7.764887ms waiting for pod "kube-controller-manager-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.100318  479219 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z96s2" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.260085  479219 pod_ready.go:92] pod "kube-proxy-z96s2" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.260112  479219 pod_ready.go:81] duration metric: took 159.785695ms waiting for pod "kube-proxy-z96s2" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.260125  479219 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.274920  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.359334  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.460258  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:26.460969  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.660366  479219 pod_ready.go:92] pod "kube-scheduler-addons-291433" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:26.660395  479219 pod_ready.go:81] duration metric: took 400.261735ms waiting for pod "kube-scheduler-addons-291433" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.660418  479219 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:26.773831  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:26.857502  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:26.959817  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:26.960018  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.274634  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.358915  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.457795  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.458007  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:27.772429  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:27.857931  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:27.958432  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:27.958581  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.274756  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.356959  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.458047  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.458154  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.774508  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:28.856931  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:28.961180  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:28.961782  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:28.969531  479219 pod_ready.go:102] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:29.273933  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.358022  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.460307  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.461182  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:29.773366  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:29.856893  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:29.958801  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:29.958840  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.273182  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.357972  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.459371  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.460109  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.774604  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:30.857556  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:30.959385  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:30.960357  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:30.969765  479219 pod_ready.go:102] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:31.275669  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.357150  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:31.458825  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:31.459080  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:31.773838  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:31.858459  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:31.958775  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:31.958912  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.275638  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.357523  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.460171  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:32.460385  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.773776  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:32.857595  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:32.961044  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:32.961275  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.277047  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.357598  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.457872  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.458104  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:33.464894  479219 pod_ready.go:102] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:33.774776  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:33.857785  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:33.958025  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:33.958318  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.276053  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.359076  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.459393  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.459398  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:34.775476  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:34.858016  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:34.959041  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:34.959114  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.273308  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.357199  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.457689  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.457740  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:35.466597  479219 pod_ready.go:102] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:35.773509  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:35.858036  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:35.959091  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:35.959449  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:36.273756  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.359410  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.459226  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:36.459418  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:36.773811  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:36.857481  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:36.959523  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:36.959730  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.275152  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.357920  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.491572  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:37.491935  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.495705  479219 pod_ready.go:102] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:37.773395  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:37.857688  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:37.958830  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:37.958891  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.343553  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.363042  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.459841  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:38.460140  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.469540  479219 pod_ready.go:92] pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace has status "Ready":"True"
	I1024 19:02:38.469595  479219 pod_ready.go:81] duration metric: took 11.809167321s waiting for pod "metrics-server-7c66d45ddc-l55zx" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:38.469614  479219 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace to be "Ready" ...
	I1024 19:02:38.776069  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:38.858773  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:38.958361  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:38.958543  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.275283  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.357967  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.458093  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:39.458358  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.773361  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:39.857198  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:39.958331  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:39.958639  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.273866  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.357824  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.459410  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:40.459619  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.561563  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:40.773999  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:40.857888  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:40.958851  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:40.959018  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.273276  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.357154  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.458331  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:41.458357  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.775779  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:41.856767  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:41.959126  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:41.959388  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.273936  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.357288  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.459442  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:42.459525  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.565363  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:42.774071  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:42.857502  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:42.961031  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:42.961976  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.353574  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.363713  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.463359  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:43.463530  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.773159  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:43.858315  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:43.960447  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:43.961332  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.344167  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.357964  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.460090  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:44.460138  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.774561  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:44.858547  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:44.958987  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:44.959139  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.067460  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:45.274249  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.359249  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.459827  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:45.459843  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.773743  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:45.857391  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:45.960240  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:45.960275  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.274299  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.357534  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.459925  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.460133  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:46.774728  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:46.857288  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:46.960127  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:46.960665  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.273365  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.358833  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.459521  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:47.463209  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.563884  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:47.775950  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:47.857211  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:47.960107  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:47.960201  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:48.274039  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.358211  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:48.458975  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.459328  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:48.773716  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:48.857197  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:48.959807  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:48.959882  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.274136  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.356944  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.459130  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:49.459291  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.773500  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:49.857843  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:49.959370  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:49.960026  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.063042  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:50.276569  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.358067  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.459546  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:50.459897  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.774777  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:50.857753  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:50.958151  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:50.958358  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.273229  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.358276  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.459538  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.459631  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:51.775390  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:51.858707  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:51.959185  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:51.959398  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.274151  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.357718  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.459272  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.459417  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:52.564256  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:52.775012  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:52.858244  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:52.958847  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:52.959051  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.274273  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.356884  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.461081  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:53.461095  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.773321  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:53.856751  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:53.959034  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:53.959214  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:02:54.274729  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.357357  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.458673  479219 kapi.go:107] duration metric: took 57.081711844s to wait for kubernetes.io/minikube-addons=registry ...
	I1024 19:02:54.458827  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:54.772557  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:54.857577  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:54.958107  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.064606  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:55.344637  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.362926  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.464842  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:55.847077  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:55.857530  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:55.963516  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:56.346783  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.363806  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.458917  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:56.774751  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:56.858365  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:56.958848  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.344598  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.358817  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.460657  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:57.563573  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:02:57.774016  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:57.858391  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:57.958632  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.275219  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.358773  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.459549  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:58.776969  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:58.857571  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:58.959853  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.274770  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.358508  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.459621  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:02:59.775369  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:02:59.858321  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:02:59.958246  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.064867  479219 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"False"
	I1024 19:03:00.273695  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.358251  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.459917  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:00.774682  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:00.857087  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:00.958495  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.274878  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.358291  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.458963  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:01.563390  479219 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace has status "Ready":"True"
	I1024 19:03:01.563483  479219 pod_ready.go:81] duration metric: took 23.093856892s waiting for pod "nvidia-device-plugin-daemonset-v72v9" in "kube-system" namespace to be "Ready" ...
	I1024 19:03:01.563519  479219 pod_ready.go:38] duration metric: took 38.102016523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:03:01.563576  479219 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:03:01.563636  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:03:01.563722  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:03:01.853118  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:01.858087  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:01.946179  479219 cri.go:89] found id: "cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:01.946206  479219 cri.go:89] found id: ""
	I1024 19:03:01.946216  479219 logs.go:284] 1 containers: [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7]
	I1024 19:03:01.946276  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:01.959881  479219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:03:01.959954  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:03:01.961265  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.157566  479219 cri.go:89] found id: "509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:02.157595  479219 cri.go:89] found id: ""
	I1024 19:03:02.157606  479219 logs.go:284] 1 containers: [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52]
	I1024 19:03:02.157673  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:02.162570  479219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:03:02.162711  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:03:02.348838  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.361004  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:02.459733  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:02.542091  479219 cri.go:89] found id: "8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:02.542131  479219 cri.go:89] found id: ""
	I1024 19:03:02.542141  479219 logs.go:284] 1 containers: [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f]
	I1024 19:03:02.542207  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:02.561716  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:03:02.561921  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:03:02.862938  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:02.961974  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.051197  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.054856  479219 cri.go:89] found id: "cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:03.054932  479219 cri.go:89] found id: ""
	I1024 19:03:03.054956  479219 logs.go:284] 1 containers: [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785]
	I1024 19:03:03.055019  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:03.060379  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:03:03.060519  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:03:03.347187  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.357873  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.357894  479219 cri.go:89] found id: "d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:03.357931  479219 cri.go:89] found id: ""
	I1024 19:03:03.357940  479219 logs.go:284] 1 containers: [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e]
	I1024 19:03:03.358000  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:03.365054  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:03:03.365135  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:03:03.461399  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:03.560850  479219 cri.go:89] found id: "9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:03.560879  479219 cri.go:89] found id: ""
	I1024 19:03:03.560890  479219 logs.go:284] 1 containers: [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290]
	I1024 19:03:03.560949  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:03.564871  479219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:03:03.564951  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:03:03.661159  479219 cri.go:89] found id: "ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:03.661186  479219 cri.go:89] found id: ""
	I1024 19:03:03.661198  479219 logs.go:284] 1 containers: [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c]
	I1024 19:03:03.661256  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:03.665753  479219 logs.go:123] Gathering logs for dmesg ...
	I1024 19:03:03.665798  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:03:03.759862  479219 logs.go:123] Gathering logs for etcd [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52] ...
	I1024 19:03:03.759909  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:03.776498  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:03.857842  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:03.895378  479219 logs.go:123] Gathering logs for kube-scheduler [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785] ...
	I1024 19:03:03.895420  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:03.959630  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.061312  479219 logs.go:123] Gathering logs for kindnet [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c] ...
	I1024 19:03:04.061363  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:04.160117  479219 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:03:04.160157  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:03:04.242970  479219 logs.go:123] Gathering logs for container status ...
	I1024 19:03:04.243015  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:03:04.275261  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.299854  479219 logs.go:123] Gathering logs for kubelet ...
	I1024 19:03:04.299893  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:03:04.359006  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.426571  479219 logs.go:123] Gathering logs for kube-apiserver [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7] ...
	I1024 19:03:04.426620  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:04.461831  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:04.491452  479219 logs.go:123] Gathering logs for coredns [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f] ...
	I1024 19:03:04.491502  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:04.565689  479219 logs.go:123] Gathering logs for kube-proxy [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e] ...
	I1024 19:03:04.565733  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:04.649377  479219 logs.go:123] Gathering logs for kube-controller-manager [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290] ...
	I1024 19:03:04.649494  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:04.720010  479219 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:03:04.720066  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:03:04.773271  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:04.858042  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:04.958875  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.274855  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.357314  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.460191  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:05.772937  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:05.857457  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:05.958843  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.274093  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.357532  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.458254  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:06.773793  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:06.858270  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:06.958419  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.274174  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.357635  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.455831  479219 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:03:07.459543  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:07.474619  479219 api_server.go:72] duration metric: took 1m18.077469305s to wait for apiserver process to appear ...
	I1024 19:03:07.474647  479219 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:03:07.474680  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:03:07.474728  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:03:07.553755  479219 cri.go:89] found id: "cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:07.553777  479219 cri.go:89] found id: ""
	I1024 19:03:07.553787  479219 logs.go:284] 1 containers: [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7]
	I1024 19:03:07.553853  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.558749  479219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:03:07.558863  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:03:07.641523  479219 cri.go:89] found id: "509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:07.641548  479219 cri.go:89] found id: ""
	I1024 19:03:07.641559  479219 logs.go:284] 1 containers: [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52]
	I1024 19:03:07.641629  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.645340  479219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:03:07.645428  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:03:07.681329  479219 cri.go:89] found id: "8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:07.681359  479219 cri.go:89] found id: ""
	I1024 19:03:07.681369  479219 logs.go:284] 1 containers: [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f]
	I1024 19:03:07.681428  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.685919  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:03:07.685998  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:03:07.763085  479219 cri.go:89] found id: "cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:07.763115  479219 cri.go:89] found id: ""
	I1024 19:03:07.763129  479219 logs.go:284] 1 containers: [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785]
	I1024 19:03:07.763191  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.768358  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:03:07.768488  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:03:07.775097  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:07.848871  479219 cri.go:89] found id: "d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:07.848892  479219 cri.go:89] found id: ""
	I1024 19:03:07.848902  479219 logs.go:284] 1 containers: [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e]
	I1024 19:03:07.848950  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.852881  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:03:07.852961  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:03:07.857479  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:07.897131  479219 cri.go:89] found id: "9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:07.897177  479219 cri.go:89] found id: ""
	I1024 19:03:07.897190  479219 logs.go:284] 1 containers: [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290]
	I1024 19:03:07.897267  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.901347  479219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:03:07.901415  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:03:07.949869  479219 cri.go:89] found id: "ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:07.949905  479219 cri.go:89] found id: ""
	I1024 19:03:07.949919  479219 logs.go:284] 1 containers: [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c]
	I1024 19:03:07.949983  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:07.953856  479219 logs.go:123] Gathering logs for kube-controller-manager [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290] ...
	I1024 19:03:07.953882  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:07.957558  479219 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:03:08.017989  479219 logs.go:123] Gathering logs for kindnet [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c] ...
	I1024 19:03:08.018031  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:08.081293  479219 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:03:08.081329  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:03:08.225383  479219 logs.go:123] Gathering logs for kubelet ...
	I1024 19:03:08.225431  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:03:08.273635  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.315292  479219 logs.go:123] Gathering logs for dmesg ...
	I1024 19:03:08.315347  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:03:08.358622  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:08.360358  479219 logs.go:123] Gathering logs for kube-apiserver [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7] ...
	I1024 19:03:08.360398  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:08.460493  479219 kapi.go:107] duration metric: took 1m11.086557458s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:03:08.471072  479219 logs.go:123] Gathering logs for etcd [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52] ...
	I1024 19:03:08.471128  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:08.592494  479219 logs.go:123] Gathering logs for coredns [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f] ...
	I1024 19:03:08.592538  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:08.754292  479219 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:03:08.754346  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:03:08.776167  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:08.858162  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.117865  479219 logs.go:123] Gathering logs for kube-scheduler [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785] ...
	I1024 19:03:09.117913  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:09.174987  479219 logs.go:123] Gathering logs for kube-proxy [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e] ...
	I1024 19:03:09.175031  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:09.224228  479219 logs.go:123] Gathering logs for container status ...
	I1024 19:03:09.224268  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:03:09.277856  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.467148  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:09.776930  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:09.858833  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:10.278112  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.357779  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:10.778815  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:10.858877  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:03:11.273691  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.360667  479219 kapi.go:107] duration metric: took 1m10.591674747s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1024 19:03:11.363402  479219 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-291433 cluster.
	I1024 19:03:11.366319  479219 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1024 19:03:11.368526  479219 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
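The skip label mentioned in the message above goes straight into pod metadata. A minimal sketch, assuming a hypothetical pod (the name is illustrative; the image is the hello-app one pulled later in this run):

	kubectl --context addons-291433 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # the gcp-auth webhook skips credential mounting for labeled pods
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0
	EOF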
	I1024 19:03:11.775654  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:11.785361  479219 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:03:11.846611  479219 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:03:11.848453  479219 api_server.go:141] control plane version: v1.28.3
	I1024 19:03:11.848497  479219 api_server.go:131] duration metric: took 4.373841544s to wait for apiserver health ...
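The healthz probe above is a plain unauthenticated GET; /healthz is readable by system:unauthenticated through the default system:public-info-viewer binding, so the same check can be reproduced with curl (-k because the apiserver serves a cluster-local certificate):

	curl -k https://192.168.49.2:8443/healthz
	ok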
	I1024 19:03:11.848510  479219 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:03:11.848537  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:03:11.848610  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:03:12.049510  479219 cri.go:89] found id: "cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:12.049588  479219 cri.go:89] found id: ""
	I1024 19:03:12.049603  479219 logs.go:284] 1 containers: [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7]
	I1024 19:03:12.049665  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.054950  479219 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:03:12.055032  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:03:12.179517  479219 cri.go:89] found id: "509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:12.179564  479219 cri.go:89] found id: ""
	I1024 19:03:12.179580  479219 logs.go:284] 1 containers: [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52]
	I1024 19:03:12.179658  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.185862  479219 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:03:12.185942  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:03:12.275973  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.445646  479219 cri.go:89] found id: "8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:12.445677  479219 cri.go:89] found id: ""
	I1024 19:03:12.445689  479219 logs.go:284] 1 containers: [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f]
	I1024 19:03:12.445747  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.451632  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:03:12.451708  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:03:12.562627  479219 cri.go:89] found id: "cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:12.562652  479219 cri.go:89] found id: ""
	I1024 19:03:12.562663  479219 logs.go:284] 1 containers: [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785]
	I1024 19:03:12.562727  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.567541  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:03:12.567619  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:03:12.664203  479219 cri.go:89] found id: "d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:12.664245  479219 cri.go:89] found id: ""
	I1024 19:03:12.664258  479219 logs.go:284] 1 containers: [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e]
	I1024 19:03:12.664359  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.669133  479219 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:03:12.669233  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:03:12.756608  479219 cri.go:89] found id: "9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:12.756643  479219 cri.go:89] found id: ""
	I1024 19:03:12.756655  479219 logs.go:284] 1 containers: [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290]
	I1024 19:03:12.756721  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.760576  479219 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:03:12.760660  479219 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:03:12.774820  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:12.851185  479219 cri.go:89] found id: "ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:12.851219  479219 cri.go:89] found id: ""
	I1024 19:03:12.851232  479219 logs.go:284] 1 containers: [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c]
	I1024 19:03:12.851296  479219 ssh_runner.go:195] Run: which crictl
	I1024 19:03:12.857427  479219 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:03:12.857462  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:03:13.008425  479219 logs.go:123] Gathering logs for kube-apiserver [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7] ...
	I1024 19:03:13.008460  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7"
	I1024 19:03:13.071275  479219 logs.go:123] Gathering logs for etcd [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52] ...
	I1024 19:03:13.071316  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52"
	I1024 19:03:13.184581  479219 logs.go:123] Gathering logs for coredns [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f] ...
	I1024 19:03:13.184657  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f"
	I1024 19:03:13.248446  479219 logs.go:123] Gathering logs for container status ...
	I1024 19:03:13.248499  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:03:13.275185  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:13.349862  479219 logs.go:123] Gathering logs for dmesg ...
	I1024 19:03:13.349904  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:03:13.384935  479219 logs.go:123] Gathering logs for kube-scheduler [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785] ...
	I1024 19:03:13.384995  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785"
	I1024 19:03:13.462084  479219 logs.go:123] Gathering logs for kube-proxy [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e] ...
	I1024 19:03:13.462132  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e"
	I1024 19:03:13.505409  479219 logs.go:123] Gathering logs for kube-controller-manager [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290] ...
	I1024 19:03:13.505463  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290"
	I1024 19:03:13.599855  479219 logs.go:123] Gathering logs for kindnet [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c] ...
	I1024 19:03:13.599911  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c"
	I1024 19:03:13.634868  479219 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:03:13.634896  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:03:13.709657  479219 logs.go:123] Gathering logs for kubelet ...
	I1024 19:03:13.709706  479219 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:03:13.774661  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:14.273690  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:14.882137  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.274386  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:15.774907  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.272667  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:16.310630  479219 system_pods.go:59] 19 kube-system pods found
	I1024 19:03:16.310680  479219 system_pods.go:61] "coredns-5dd5756b68-2k476" [dc8321a0-a64d-4d4e-a65e-4d30bf6bad48] Running
	I1024 19:03:16.310687  479219 system_pods.go:61] "csi-hostpath-attacher-0" [4ce22bc9-0b17-4a1c-90c2-bf4a19bead60] Running
	I1024 19:03:16.310692  479219 system_pods.go:61] "csi-hostpath-resizer-0" [a499245b-8901-4225-a7a1-2074a550ac5e] Running
	I1024 19:03:16.310701  479219 system_pods.go:61] "csi-hostpathplugin-ss74r" [8a331761-9a78-4729-861e-b2ec2e0df33a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:03:16.310712  479219 system_pods.go:61] "etcd-addons-291433" [94249408-46f6-4862-9a6c-755a967e109b] Running
	I1024 19:03:16.310721  479219 system_pods.go:61] "kindnet-x548h" [9272c221-a409-43f0-8be6-c39900253c9d] Running
	I1024 19:03:16.310726  479219 system_pods.go:61] "kube-apiserver-addons-291433" [c07a83f9-8e9d-4e24-85fa-582bba7c01a5] Running
	I1024 19:03:16.310735  479219 system_pods.go:61] "kube-controller-manager-addons-291433" [89390c24-cfa9-4ac1-b0e8-d90beaf08ebc] Running
	I1024 19:03:16.310745  479219 system_pods.go:61] "kube-ingress-dns-minikube" [d983df13-f989-4bf4-b445-347a5d5cba02] Running
	I1024 19:03:16.310750  479219 system_pods.go:61] "kube-proxy-z96s2" [ec8aa2e4-34aa-427c-8ac7-9c00a361e0a0] Running
	I1024 19:03:16.310759  479219 system_pods.go:61] "kube-scheduler-addons-291433" [57c7f413-4a7f-49e1-9dcf-f76f513f45de] Running
	I1024 19:03:16.310765  479219 system_pods.go:61] "metrics-server-7c66d45ddc-l55zx" [73994317-9cc6-4c99-b4dd-cac48cecc00d] Running
	I1024 19:03:16.310773  479219 system_pods.go:61] "nvidia-device-plugin-daemonset-v72v9" [6d61b791-a0a9-4ca6-bc8b-eb4e7f63c5e4] Running
	I1024 19:03:16.310778  479219 system_pods.go:61] "registry-proxy-bgxs6" [00eacd34-1a93-4ccc-85e2-7605a5e16b4e] Running
	I1024 19:03:16.310785  479219 system_pods.go:61] "registry-t6vg5" [9e38b9b1-7def-4c43-a353-22ddf6cbe203] Running
	I1024 19:03:16.310790  479219 system_pods.go:61] "snapshot-controller-58dbcc7b99-dq7m9" [05ce2c93-4129-4115-a7be-a8d482bdfbf3] Running
	I1024 19:03:16.310799  479219 system_pods.go:61] "snapshot-controller-58dbcc7b99-l6t24" [52437a5e-0570-4fa1-84b9-e0fc4af519ab] Running
	I1024 19:03:16.310804  479219 system_pods.go:61] "storage-provisioner" [878fc22d-69f4-4676-a2be-96ee3066c657] Running
	I1024 19:03:16.310811  479219 system_pods.go:61] "tiller-deploy-7b677967b9-hjbd2" [83360eb6-06d9-43a5-892f-887cc8587848] Running
	I1024 19:03:16.310820  479219 system_pods.go:74] duration metric: took 4.462302982s to wait for pod list to return data ...
	I1024 19:03:16.310834  479219 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:03:16.314285  479219 default_sa.go:45] found service account: "default"
	I1024 19:03:16.314323  479219 default_sa.go:55] duration metric: took 3.477986ms for default service account to be created ...
	I1024 19:03:16.314336  479219 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:03:16.331303  479219 system_pods.go:86] 19 kube-system pods found
	I1024 19:03:16.331367  479219 system_pods.go:89] "coredns-5dd5756b68-2k476" [dc8321a0-a64d-4d4e-a65e-4d30bf6bad48] Running
	I1024 19:03:16.331378  479219 system_pods.go:89] "csi-hostpath-attacher-0" [4ce22bc9-0b17-4a1c-90c2-bf4a19bead60] Running
	I1024 19:03:16.331386  479219 system_pods.go:89] "csi-hostpath-resizer-0" [a499245b-8901-4225-a7a1-2074a550ac5e] Running
	I1024 19:03:16.331401  479219 system_pods.go:89] "csi-hostpathplugin-ss74r" [8a331761-9a78-4729-861e-b2ec2e0df33a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1024 19:03:16.331413  479219 system_pods.go:89] "etcd-addons-291433" [94249408-46f6-4862-9a6c-755a967e109b] Running
	I1024 19:03:16.331422  479219 system_pods.go:89] "kindnet-x548h" [9272c221-a409-43f0-8be6-c39900253c9d] Running
	I1024 19:03:16.331448  479219 system_pods.go:89] "kube-apiserver-addons-291433" [c07a83f9-8e9d-4e24-85fa-582bba7c01a5] Running
	I1024 19:03:16.331455  479219 system_pods.go:89] "kube-controller-manager-addons-291433" [89390c24-cfa9-4ac1-b0e8-d90beaf08ebc] Running
	I1024 19:03:16.331476  479219 system_pods.go:89] "kube-ingress-dns-minikube" [d983df13-f989-4bf4-b445-347a5d5cba02] Running
	I1024 19:03:16.331483  479219 system_pods.go:89] "kube-proxy-z96s2" [ec8aa2e4-34aa-427c-8ac7-9c00a361e0a0] Running
	I1024 19:03:16.331509  479219 system_pods.go:89] "kube-scheduler-addons-291433" [57c7f413-4a7f-49e1-9dcf-f76f513f45de] Running
	I1024 19:03:16.331518  479219 system_pods.go:89] "metrics-server-7c66d45ddc-l55zx" [73994317-9cc6-4c99-b4dd-cac48cecc00d] Running
	I1024 19:03:16.331525  479219 system_pods.go:89] "nvidia-device-plugin-daemonset-v72v9" [6d61b791-a0a9-4ca6-bc8b-eb4e7f63c5e4] Running
	I1024 19:03:16.331533  479219 system_pods.go:89] "registry-proxy-bgxs6" [00eacd34-1a93-4ccc-85e2-7605a5e16b4e] Running
	I1024 19:03:16.331549  479219 system_pods.go:89] "registry-t6vg5" [9e38b9b1-7def-4c43-a353-22ddf6cbe203] Running
	I1024 19:03:16.331557  479219 system_pods.go:89] "snapshot-controller-58dbcc7b99-dq7m9" [05ce2c93-4129-4115-a7be-a8d482bdfbf3] Running
	I1024 19:03:16.331574  479219 system_pods.go:89] "snapshot-controller-58dbcc7b99-l6t24" [52437a5e-0570-4fa1-84b9-e0fc4af519ab] Running
	I1024 19:03:16.331581  479219 system_pods.go:89] "storage-provisioner" [878fc22d-69f4-4676-a2be-96ee3066c657] Running
	I1024 19:03:16.331599  479219 system_pods.go:89] "tiller-deploy-7b677967b9-hjbd2" [83360eb6-06d9-43a5-892f-887cc8587848] Running
	I1024 19:03:16.331614  479219 system_pods.go:126] duration metric: took 17.268854ms to wait for k8s-apps to be running ...
	I1024 19:03:16.331634  479219 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:03:16.331751  479219 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:03:16.358021  479219 system_svc.go:56] duration metric: took 26.371409ms WaitForService to wait for kubelet.
	I1024 19:03:16.358064  479219 kubeadm.go:581] duration metric: took 1m26.960921575s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:03:16.358104  479219 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:03:16.362339  479219 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:03:16.362393  479219 node_conditions.go:123] node cpu capacity is 8
	I1024 19:03:16.362414  479219 node_conditions.go:105] duration metric: took 4.303898ms to run NodePressure ...
	I1024 19:03:16.362430  479219 start.go:228] waiting for startup goroutines ...
	I1024 19:03:16.775851  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.276373  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:17.773824  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.273117  479219 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:03:18.775443  479219 kapi.go:107] duration metric: took 1m20.019483977s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1024 19:03:18.777903  479219 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1024 19:03:18.780008  479219 addons.go:502] enable addons completed in 1m29.46886736s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns helm-tiller inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
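Each kapi.go:96 line above is one pass of a label-selector poll that repeats until the matching pods report Ready; the kubectl equivalent of one of those waits would be roughly (the timeout value is an assumption, not taken from this run):

	kubectl --context addons-291433 -n kube-system wait pod \
	  --for=condition=ready \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --timeout=120s   # minikube's internal deadline differs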
	I1024 19:03:18.780072  479219 start.go:233] waiting for cluster config update ...
	I1024 19:03:18.780096  479219 start.go:242] writing updated cluster config ...
	I1024 19:03:18.780389  479219 ssh_runner.go:195] Run: rm -f paused
	I1024 19:03:18.842838  479219 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:03:18.845279  479219 out.go:177] * Done! kubectl is now configured to use "addons-291433" cluster and "default" namespace by default
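The repeated "Gathering logs" passes interleaved above reduce to two crictl calls per control-plane component plus journalctl for the daemons; a hand-run equivalent, using a container ID resolved in this run:

	sudo crictl ps -a --quiet --name=kube-apiserver   # resolve the component's container ID
	sudo crictl logs --tail 400 cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7
	sudo journalctl -u crio -n 400      # CRI-O daemon log
	sudo journalctl -u kubelet -n 400   # kubelet log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings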
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:06:11 addons-291433 crio[952]: time="2023-10-24 19:06:11.970959470Z" level=info msg="Removing container: 968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905" id=fcf1e162-043e-4109-848b-721c77abb7e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.078637037Z" level=info msg="Removed container 968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=fcf1e162-043e-4109-848b-721c77abb7e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.136520476Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6" id=1b95ea61-5bcc-44b7-8c7b-1b68461f8686 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.137676706Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=fe7ec123-2441-40c7-aa85-147c820b7000 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.139046682Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=fe7ec123-2441-40c7-aa85-147c820b7000 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.140197237Z" level=info msg="Creating container: default/hello-world-app-5d77478584-9l9fz/hello-world-app" id=977ad82b-97e1-40d1-9710-c6c4fcc8c6ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.140329039Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.248947098Z" level=info msg="Created container 2874ffdaab677e92c1ae7b5b8922a0e7584daf7b2726d46fec64dbdf7b4c3002: default/hello-world-app-5d77478584-9l9fz/hello-world-app" id=977ad82b-97e1-40d1-9710-c6c4fcc8c6ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.249863682Z" level=info msg="Starting container: 2874ffdaab677e92c1ae7b5b8922a0e7584daf7b2726d46fec64dbdf7b4c3002" id=c78e0591-9062-46c7-856e-021671a637bf name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:06:12 addons-291433 crio[952]: time="2023-10-24 19:06:12.260699965Z" level=info msg="Started container" PID=11090 containerID=2874ffdaab677e92c1ae7b5b8922a0e7584daf7b2726d46fec64dbdf7b4c3002 description=default/hello-world-app-5d77478584-9l9fz/hello-world-app id=c78e0591-9062-46c7-856e-021671a637bf name=/runtime.v1.RuntimeService/StartContainer sandboxID=64b61b129da4bdcc7b5dc37b04a85b39270b54603a42b26394d7f9cf29ba4c90
	Oct 24 19:06:13 addons-291433 crio[952]: time="2023-10-24 19:06:13.975150347Z" level=info msg="Stopping container: d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4 (timeout: 2s)" id=d5074bde-0364-4632-ab4a-37c00ab2f5fc name=/runtime.v1.RuntimeService/StopContainer
	Oct 24 19:06:15 addons-291433 crio[952]: time="2023-10-24 19:06:15.985845407Z" level=warning msg="Stopping container d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=d5074bde-0364-4632-ab4a-37c00ab2f5fc name=/runtime.v1.RuntimeService/StopContainer
	Oct 24 19:06:16 addons-291433 conmon[6225]: conmon d73fcc78934d4bdd0ff4 <ninfo>: container 6237 exited with status 137
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.147516804Z" level=info msg="Stopped container d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4: ingress-nginx/ingress-nginx-controller-6f48fc54bd-lll66/controller" id=d5074bde-0364-4632-ab4a-37c00ab2f5fc name=/runtime.v1.RuntimeService/StopContainer
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.148417947Z" level=info msg="Stopping pod sandbox: b8604cb894e972b7007b121875c8ef389c7997fb241062d608f2eafe5ac3f157" id=455a5224-5416-463d-b2f1-bb13e00e2397 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.153880839Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-GABP6UC2JXHCFTWU - [0:0]\n:KUBE-HP-RFIPG6L22WAFYFMW - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-GABP6UC2JXHCFTWU\n-X KUBE-HP-RFIPG6L22WAFYFMW\nCOMMIT\n"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.156504527Z" level=info msg="Closing host port tcp:80"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.156586999Z" level=info msg="Closing host port tcp:443"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.158666611Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.158717637Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.159001957Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-6f48fc54bd-lll66 Namespace:ingress-nginx ID:b8604cb894e972b7007b121875c8ef389c7997fb241062d608f2eafe5ac3f157 UID:7726c5f2-392c-4758-8e64-5835b0b90009 NetNS:/var/run/netns/cb9e4926-1738-474d-997d-83d2d2a43dbc Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.159211101Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-6f48fc54bd-lll66 from CNI network \"kindnet\" (type=ptp)"
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.202438413Z" level=info msg="Stopped pod sandbox: b8604cb894e972b7007b121875c8ef389c7997fb241062d608f2eafe5ac3f157" id=455a5224-5416-463d-b2f1-bb13e00e2397 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 24 19:06:16 addons-291433 crio[952]: time="2023-10-24 19:06:16.986895215Z" level=info msg="Removing container: d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4" id=9b94cfdd-5518-4caf-bf94-9b46084219e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 24 19:06:17 addons-291433 crio[952]: time="2023-10-24 19:06:17.004691268Z" level=info msg="Removed container d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4: ingress-nginx/ingress-nginx-controller-6f48fc54bd-lll66/controller" id=9b94cfdd-5518-4caf-bf94-9b46084219e7 name=/runtime.v1.RuntimeService/RemoveContainer
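The stop sequence above is the usual graceful-stop-then-kill path: StopContainer is issued with a 2s timeout, the timeout expires, the runtime falls back to SIGKILL, and conmon reports exit status 137 (128 + 9, the SIGKILL signal number). By hand it looks like:

	sudo crictl stop --timeout 2 d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4
	echo $((128 + 9))   # 137 — the status a SIGKILLed process reports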
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2874ffdaab677       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      8 seconds ago       Running             hello-world-app           0                   64b61b129da4b       hello-world-app-5d77478584-9l9fz
	8d5bee9677beb       docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf                              2 minutes ago       Running             nginx                     0                   1cd79dbde2715       nginx
	a1401b3d36957       ghcr.io/headlamp-k8s/headlamp@sha256:0fff6ba0a2a449e3948274f09640fd1f917b038a1100e6fe78ce401be75584c4                        2 minutes ago       Running             headlamp                  0                   5970447d74c54       headlamp-94b766c-4js4r
	fde8200b1e642       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   52c8e8f83a4e0       gcp-auth-d4c87556c-mcqhc
	d61e8cbdff460       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              patch                     0                   2dd07f255916a       ingress-nginx-admission-patch-vtwmd
	e6df07f27190b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   99dc105590875       ingress-nginx-admission-create-v8zml
	4009fa64132d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   94f3c4663ffa2       storage-provisioner
	8cd3fe7d3a733       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   afbfaf2764435       coredns-5dd5756b68-2k476
	d2fa01a6a5f0e       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                                             4 minutes ago       Running             kube-proxy                0                   f4721c6b9a830       kube-proxy-z96s2
	ef2e60d35afef       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                                             4 minutes ago       Running             kindnet-cni               0                   43e2655485ec5       kindnet-x548h
	cce3baa61b6e0       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                                             4 minutes ago       Running             kube-scheduler            0                   8acccc7b35906       kube-scheduler-addons-291433
	509537dc21663       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   a1d5c73ed2dfb       etcd-addons-291433
	cf7372dff0d89       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                                             4 minutes ago       Running             kube-apiserver            0                   5b7fae720dead       kube-apiserver-addons-291433
	9d84fdd4dac04       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                                             4 minutes ago       Running             kube-controller-manager   0                   90345dad3029c       kube-controller-manager-addons-291433
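This table is what the container-status probe logged earlier produces; the same view can be pulled from the node directly, with the docker fallback exactly as the harness runs it:

	minikube -p addons-291433 ssh -- sudo crictl ps -a
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # harness form, run on the node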
	
	* 
	* ==> coredns [8cd3fe7d3a733f11eb1bafbd678d1003f5fa926c6a9f3747d8cf487fd51fd84f] <==
	* [INFO] 10.244.0.17:53019 - 0 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099851s
	[INFO] 10.244.0.17:40800 - 8090 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005230256s
	[INFO] 10.244.0.17:40800 - 9623 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00912177s
	[INFO] 10.244.0.17:36939 - 27971 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006921795s
	[INFO] 10.244.0.17:36939 - 326 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007082427s
	[INFO] 10.244.0.17:33373 - 61551 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005475376s
	[INFO] 10.244.0.17:33373 - 59499 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.009695093s
	[INFO] 10.244.0.17:51795 - 27711 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078621s
	[INFO] 10.244.0.17:51795 - 7170 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145193s
	[INFO] 10.244.0.20:60953 - 8750 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000195138s
	[INFO] 10.244.0.20:57254 - 32852 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216474s
	[INFO] 10.244.0.20:42387 - 25730 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119813s
	[INFO] 10.244.0.20:44070 - 60592 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191446s
	[INFO] 10.244.0.20:35847 - 60284 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000107247s
	[INFO] 10.244.0.20:49400 - 25693 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122221s
	[INFO] 10.244.0.20:55270 - 58863 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.009635177s
	[INFO] 10.244.0.20:50454 - 908 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.009734782s
	[INFO] 10.244.0.20:60646 - 32597 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008751446s
	[INFO] 10.244.0.20:46561 - 2536 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.013115974s
	[INFO] 10.244.0.20:38959 - 54041 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008824669s
	[INFO] 10.244.0.20:46637 - 56424 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008972946s
	[INFO] 10.244.0.20:51423 - 43926 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001014744s
	[INFO] 10.244.0.20:53032 - 1553 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001095646s
	[INFO] 10.244.0.23:41977 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000269557s
	[INFO] 10.244.0.23:41061 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165543s
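
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion: the queried name carries four dots, below the Kubernetes default of ndots:5, so each search suffix (cluster.local plus the GCE host domains us-central1-a.c.k8s-minikube.internal, c.k8s-minikube.internal and google.internal inherited from the node) is tried before the bare name finally answers NOERROR. A minimal Go sketch of that expansion order; the search list is reconstructed from the suffixes visible in the log, not read from a pod:

	package main

	import "fmt"

	func main() {
		// Search domains reconstructed from the NXDOMAIN suffixes above.
		search := []string{
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		name := "registry.kube-system.svc.cluster.local" // 4 dots < ndots:5
		for _, s := range search {
			fmt.Printf("query %s.%s -> NXDOMAIN\n", name, s)
		}
		fmt.Printf("query %s -> NOERROR\n", name) // the bare name resolves last
	}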
	
	* 
	* ==> describe nodes <==
	* Name:               addons-291433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-291433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=addons-291433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_01_37_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-291433
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:01:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-291433
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:06:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:04:10 +0000   Tue, 24 Oct 2023 19:01:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:04:10 +0000   Tue, 24 Oct 2023 19:01:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:04:10 +0000   Tue, 24 Oct 2023 19:01:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:04:10 +0000   Tue, 24 Oct 2023 19:02:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-291433
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 2193b3451a3f46d18d05242dd1475cfd
	  System UUID:                289c84d4-47ea-463c-8020-b3e220225829
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9l9fz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-d4c87556c-mcqhc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  headlamp                    headlamp-94b766c-4js4r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 coredns-5dd5756b68-2k476                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m32s
	  kube-system                 etcd-addons-291433                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m45s
	  kube-system                 kindnet-x548h                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m32s
	  kube-system                 kube-apiserver-addons-291433             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-291433    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-proxy-z96s2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-addons-291433             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node addons-291433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node addons-291433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x8 over 4m52s)  kubelet          Node addons-291433 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s                  kubelet          Node addons-291433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s                  kubelet          Node addons-291433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s                  kubelet          Node addons-291433 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m33s                  node-controller  Node addons-291433 event: Registered Node addons-291433 in Controller
	  Normal  NodeReady                3m58s                  kubelet          Node addons-291433 status is now: NodeReady
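
A capture note on the resource tables above: kubectl prints literal percent signs there, and when captured text is later passed through Go's fmt as a format string, "%)" is parsed as a verb with no operand and rendered as "%!)(MISSING)" — the raw capture of this report shows the tables with exactly that artifact. A minimal sketch of the mangling, using an illustrative line:

	package main

	import "fmt"

	func main() {
		line := "cpu 100m (1%)"
		fmt.Println(fmt.Sprintf(line))       // "cpu 100m (1%!)(MISSING)": '%)' read as a verb
		fmt.Println(fmt.Sprintf("%s", line)) // "cpu 100m (1%)": passing the text as data is safe
	}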
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 fc a6 06 ef 71 08 06
	[Oct24 18:28] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e e1 dd 7c 23 e4 08 06
	[ +43.077847] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 5a 0c ee 02 dc cb 08 06
	[  +0.922741] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 95 0e bb da a4 08 06
	[  +0.033390] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 4a 25 7d 38 57 80 08 06
	[  +7.819584] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a a5 57 9f ad 2a 08 06
	[Oct24 19:03] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[  +1.027069] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[Oct24 19:04] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[  +4.191586] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[  +8.191213] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[ +16.126562] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	[Oct24 19:05] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a ee 5e 23 82 d8 b6 bc 01 94 d5 c4 08 00
	
	* 
	* ==> etcd [509537dc21663980f762e64850311a665ca1f021db82adc440f5343750f5ce52] <==
	* {"level":"warn","ts":"2023-10-24T19:01:54.94707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.61056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-24T19:01:54.950039Z","caller":"traceutil/trace.go:171","msg":"trace[1702513595] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:476; }","duration":"187.580943ms","start":"2023-10-24T19:01:54.762383Z","end":"2023-10-24T19:01:54.949964Z","steps":["trace[1702513595] 'agreement among raft nodes before linearized reading'  (duration: 184.461594ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:01:55.043946Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"180.968636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:01:55.044212Z","caller":"traceutil/trace.go:171","msg":"trace[1342689188] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:480; }","duration":"181.271444ms","start":"2023-10-24T19:01:54.862913Z","end":"2023-10-24T19:01:55.044185Z","steps":["trace[1342689188] 'agreement among raft nodes before linearized reading'  (duration: 180.958407ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:01:55.044262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.08715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:01:55.044298Z","caller":"traceutil/trace.go:171","msg":"trace[2024217292] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:0; response_revision:480; }","duration":"182.139792ms","start":"2023-10-24T19:01:54.862148Z","end":"2023-10-24T19:01:55.044287Z","steps":["trace[2024217292] 'agreement among raft nodes before linearized reading'  (duration: 182.042688ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:01:55.043963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.009084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-24T19:01:55.044388Z","caller":"traceutil/trace.go:171","msg":"trace[805215052] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:480; }","duration":"193.454406ms","start":"2023-10-24T19:01:54.850924Z","end":"2023-10-24T19:01:55.044378Z","steps":["trace[805215052] 'agreement among raft nodes before linearized reading'  (duration: 192.951378ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:09.465214Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.937107ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11683"}
	{"level":"info","ts":"2023-10-24T19:03:09.465299Z","caller":"traceutil/trace.go:171","msg":"trace[1867886071] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1126; }","duration":"109.040828ms","start":"2023-10-24T19:03:09.356241Z","end":"2023-10-24T19:03:09.465282Z","steps":["trace[1867886071] 'range keys from in-memory index tree'  (duration: 108.663976ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:03:14.87785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.005996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:90769"}
	{"level":"info","ts":"2023-10-24T19:03:14.877955Z","caller":"traceutil/trace.go:171","msg":"trace[1623153874] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1170; }","duration":"108.13137ms","start":"2023-10-24T19:03:14.769809Z","end":"2023-10-24T19:03:14.87794Z","steps":["trace[1623153874] 'range keys from in-memory index tree'  (duration: 107.730404ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:25.981179Z","caller":"traceutil/trace.go:171","msg":"trace[496667989] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"115.51639ms","start":"2023-10-24T19:03:25.865636Z","end":"2023-10-24T19:03:25.981152Z","steps":["trace[496667989] 'process raft request'  (duration: 61.751397ms)","trace[496667989] 'compare'  (duration: 53.53257ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:03:31.706535Z","caller":"traceutil/trace.go:171","msg":"trace[1719032991] linearizableReadLoop","detail":"{readStateIndex:1350; appliedIndex:1349; }","duration":"141.297251ms","start":"2023-10-24T19:03:31.565214Z","end":"2023-10-24T19:03:31.706511Z","steps":["trace[1719032991] 'read index received'  (duration: 75.779735ms)","trace[1719032991] 'applied index is now lower than readState.Index'  (duration: 65.516495ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:03:31.706737Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.523469ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2023-10-24T19:03:31.706691Z","caller":"traceutil/trace.go:171","msg":"trace[27969727] transaction","detail":"{read_only:false; response_revision:1308; number_of_response:1; }","duration":"152.807956ms","start":"2023-10-24T19:03:31.553795Z","end":"2023-10-24T19:03:31.706602Z","steps":["trace[27969727] 'process raft request'  (duration: 87.258955ms)","trace[27969727] 'compare'  (duration: 65.303592ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:03:31.70688Z","caller":"traceutil/trace.go:171","msg":"trace[216917387] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1308; }","duration":"141.68349ms","start":"2023-10-24T19:03:31.565178Z","end":"2023-10-24T19:03:31.706861Z","steps":["trace[216917387] 'agreement among raft nodes before linearized reading'  (duration: 141.42748ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:31.928362Z","caller":"traceutil/trace.go:171","msg":"trace[1413084816] linearizableReadLoop","detail":"{readStateIndex:1352; appliedIndex:1351; }","duration":"156.348798ms","start":"2023-10-24T19:03:31.771988Z","end":"2023-10-24T19:03:31.928337Z","steps":["trace[1413084816] 'read index received'  (duration: 72.939466ms)","trace[1413084816] 'applied index is now lower than readState.Index'  (duration: 83.408266ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:03:31.928442Z","caller":"traceutil/trace.go:171","msg":"trace[1472746425] transaction","detail":"{read_only:false; response_revision:1310; number_of_response:1; }","duration":"156.98016ms","start":"2023-10-24T19:03:31.771417Z","end":"2023-10-24T19:03:31.928397Z","steps":["trace[1472746425] 'process raft request'  (duration: 73.496816ms)","trace[1472746425] 'compare'  (duration: 83.278509ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:03:31.928565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.577287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:1 size:635"}
	{"level":"info","ts":"2023-10-24T19:03:31.928642Z","caller":"traceutil/trace.go:171","msg":"trace[1733634847] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:1; response_revision:1310; }","duration":"156.670746ms","start":"2023-10-24T19:03:31.771956Z","end":"2023-10-24T19:03:31.928627Z","steps":["trace[1733634847] 'agreement among raft nodes before linearized reading'  (duration: 156.511375ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:32.053707Z","caller":"traceutil/trace.go:171","msg":"trace[428624563] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1310; }","duration":"122.826099ms","start":"2023-10-24T19:03:31.930859Z","end":"2023-10-24T19:03:32.053685Z","steps":["trace[428624563] 'process raft request'  (duration: 119.756888ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:32.053788Z","caller":"traceutil/trace.go:171","msg":"trace[594300546] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1311; }","duration":"122.567771ms","start":"2023-10-24T19:03:31.931201Z","end":"2023-10-24T19:03:32.053769Z","steps":["trace[594300546] 'process raft request'  (duration: 122.41544ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:03:37.702208Z","caller":"traceutil/trace.go:171","msg":"trace[939379998] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"157.165478ms","start":"2023-10-24T19:03:37.545013Z","end":"2023-10-24T19:03:37.702179Z","steps":["trace[939379998] 'process raft request'  (duration: 94.140493ms)","trace[939379998] 'compare'  (duration: 62.880036ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:04:09.40785Z","caller":"traceutil/trace.go:171","msg":"trace[2140608248] transaction","detail":"{read_only:false; response_revision:1587; number_of_response:1; }","duration":"106.457546ms","start":"2023-10-24T19:04:09.301367Z","end":"2023-10-24T19:04:09.407824Z","steps":["trace[2140608248] 'process raft request'  (duration: 106.311794ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [fde8200b1e64278b788141da41cbc3e1176731d787bd39a252e5baf624e319b2] <==
	* 2023/10/24 19:03:11 GCP Auth Webhook started!
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:19 Ready to marshal response ...
	2023/10/24 19:03:19 Ready to write response ...
	2023/10/24 19:03:29 Ready to marshal response ...
	2023/10/24 19:03:29 Ready to write response ...
	2023/10/24 19:03:30 Ready to marshal response ...
	2023/10/24 19:03:30 Ready to write response ...
	2023/10/24 19:03:32 Ready to marshal response ...
	2023/10/24 19:03:32 Ready to write response ...
	2023/10/24 19:03:32 Ready to marshal response ...
	2023/10/24 19:03:32 Ready to write response ...
	2023/10/24 19:03:32 Ready to marshal response ...
	2023/10/24 19:03:32 Ready to write response ...
	2023/10/24 19:03:42 Ready to marshal response ...
	2023/10/24 19:03:42 Ready to write response ...
	2023/10/24 19:03:45 Ready to marshal response ...
	2023/10/24 19:03:45 Ready to write response ...
	2023/10/24 19:03:48 Ready to marshal response ...
	2023/10/24 19:03:48 Ready to write response ...
	2023/10/24 19:04:17 Ready to marshal response ...
	2023/10/24 19:04:17 Ready to write response ...
	2023/10/24 19:06:10 Ready to marshal response ...
	2023/10/24 19:06:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:06:21 up  2:48,  0 users,  load average: 0.62, 1.05, 0.93
	Linux addons-291433 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [ef2e60d35afef21eb7cdec525d1db1840b41e30eb653590861cd865ee9b16e6c] <==
	* I1024 19:04:13.308819       1 main.go:227] handling current node
	I1024 19:04:23.322557       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:04:23.322586       1 main.go:227] handling current node
	I1024 19:04:33.327986       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:04:33.328016       1 main.go:227] handling current node
	I1024 19:04:43.342420       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:04:43.342454       1 main.go:227] handling current node
	I1024 19:04:53.356239       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:04:53.356281       1 main.go:227] handling current node
	I1024 19:05:03.369027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:03.369055       1 main.go:227] handling current node
	I1024 19:05:13.373736       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:13.373772       1 main.go:227] handling current node
	I1024 19:05:23.384492       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:23.384517       1 main.go:227] handling current node
	I1024 19:05:33.389122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:33.389146       1 main.go:227] handling current node
	I1024 19:05:43.400343       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:43.400368       1 main.go:227] handling current node
	I1024 19:05:53.405118       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:05:53.405153       1 main.go:227] handling current node
	I1024 19:06:03.418123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:06:03.418149       1 main.go:227] handling current node
	I1024 19:06:13.423138       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:06:13.423164       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [cf7372dff0d891a10c029c0f1c76af0cb3acba5bad2bcd30abde1971ae43b0d7] <==
	* I1024 19:03:45.263860       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.172.254"}
	E1024 19:03:47.720470       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1024 19:03:53.484676       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.28:37164: read: connection reset by peer
	I1024 19:03:55.608465       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1024 19:04:34.472465       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.472528       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.482752       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.482912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.487878       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.488038       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.493809       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.493859       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.501546       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.501642       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.508353       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.508403       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.551808       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.551891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:04:34.553132       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:04:34.553169       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1024 19:04:35.494814       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1024 19:04:35.553897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1024 19:04:35.564501       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1024 19:06:10.320719       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.108.197"}
	E1024 19:06:12.050404       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc009a59cb0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc008060af0), ResponseWriter:(*httpsnoop.rw)(0xc008060af0), Flusher:(*httpsnoop.rw)(0xc008060af0), CloseNotifier:(*httpsnoop.rw)(0xc008060af0), Pusher:(*httpsnoop.rw)(0xc008060af0)}}, encoder:(*versioning.codec)(0xc00324c500), memAllocator:(*runtime.Allocator)(0xc00a67f890)})
	
	* 
	* ==> kube-controller-manager [9d84fdd4dac0421a5fbc22b7c9400fb415afd3c9cb233fba755ffae75f845290] <==
	* W1024 19:05:08.647767       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:08.647811       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:17.523529       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:17.523558       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:17.912542       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:17.912590       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:42.045853       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:42.045893       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:50.406259       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:50.406300       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:05:57.917476       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:05:57.917515       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:06:04.093688       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:06:04.093722       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1024 19:06:10.114534       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1024 19:06:10.145147       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9l9fz"
	I1024 19:06:10.153329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.813999ms"
	I1024 19:06:10.160528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.931905ms"
	I1024 19:06:10.160640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="63.438µs"
	I1024 19:06:10.164616       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="98.173µs"
	I1024 19:06:12.698881       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1024 19:06:12.701447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6f48fc54bd" duration="9.561µs"
	I1024 19:06:12.705310       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1024 19:06:13.000407       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.949581ms"
	I1024 19:06:13.000525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.57µs"
	
	* 
	* ==> kube-proxy [d2fa01a6a5f0e83debc840c716c8b9c385db8a8349f99cbfd93b4c79ad110f3e] <==
	* I1024 19:01:53.859697       1 server_others.go:69] "Using iptables proxy"
	I1024 19:01:54.046651       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1024 19:01:54.455605       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:01:54.556420       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:01:54.556550       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:01:54.556587       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:01:54.556634       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:01:54.556982       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:01:54.557352       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:01:54.558189       1 config.go:188] "Starting service config controller"
	I1024 19:01:54.559161       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:01:54.558688       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:01:54.559292       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:01:54.558698       1 config.go:315] "Starting node config controller"
	I1024 19:01:54.559308       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:01:54.666713       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:01:54.666785       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:01:54.847587       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [cce3baa61b6e07053b28b6d5f9635fb1683d20c3b25c786c5cbe820f01ced785] <==
	* W1024 19:01:33.143496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:33.143556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:33.145559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:01:33.146157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:01:33.145942       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:33.146212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:01:33.145967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:33.146250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:33.146770       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:33.146838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:33.147007       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:01:33.147048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 19:01:33.147235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:01:33.147271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 19:01:33.940727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:01:33.940757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:01:33.996687       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:01:33.996722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:01:34.080043       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:01:34.080073       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1024 19:01:34.091493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:34.091533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1024 19:01:34.249142       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 19:01:34.249185       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1024 19:01:34.659571       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 19:06:10 addons-291433 kubelet[1558]: I1024 19:06:10.344189    1558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4s6\" (UniqueName: \"kubernetes.io/projected/d7ddf494-4476-4248-ad6d-de0afeb3a79a-kube-api-access-hw4s6\") pod \"hello-world-app-5d77478584-9l9fz\" (UID: \"d7ddf494-4476-4248-ad6d-de0afeb3a79a\") " pod="default/hello-world-app-5d77478584-9l9fz"
	Oct 24 19:06:10 addons-291433 kubelet[1558]: I1024 19:06:10.344274    1558 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d7ddf494-4476-4248-ad6d-de0afeb3a79a-gcp-creds\") pod \"hello-world-app-5d77478584-9l9fz\" (UID: \"d7ddf494-4476-4248-ad6d-de0afeb3a79a\") " pod="default/hello-world-app-5d77478584-9l9fz"
	Oct 24 19:06:10 addons-291433 kubelet[1558]: W1024 19:06:10.813862    1558 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/afce13c26fb844e79a252a5377c8862668e2cefb073f8bd458ca6d536c4cf2d6/crio-64b61b129da4bdcc7b5dc37b04a85b39270b54603a42b26394d7f9cf29ba4c90 WatchSource:0}: Error finding container 64b61b129da4bdcc7b5dc37b04a85b39270b54603a42b26394d7f9cf29ba4c90: Status 404 returned error can't find the container with id 64b61b129da4bdcc7b5dc37b04a85b39270b54603a42b26394d7f9cf29ba4c90
	Oct 24 19:06:11 addons-291433 kubelet[1558]: I1024 19:06:11.553613    1558 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjvr\" (UniqueName: \"kubernetes.io/projected/d983df13-f989-4bf4-b445-347a5d5cba02-kube-api-access-htjvr\") pod \"d983df13-f989-4bf4-b445-347a5d5cba02\" (UID: \"d983df13-f989-4bf4-b445-347a5d5cba02\") "
	Oct 24 19:06:11 addons-291433 kubelet[1558]: I1024 19:06:11.556212    1558 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d983df13-f989-4bf4-b445-347a5d5cba02-kube-api-access-htjvr" (OuterVolumeSpecName: "kube-api-access-htjvr") pod "d983df13-f989-4bf4-b445-347a5d5cba02" (UID: "d983df13-f989-4bf4-b445-347a5d5cba02"). InnerVolumeSpecName "kube-api-access-htjvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:06:11 addons-291433 kubelet[1558]: I1024 19:06:11.654930    1558 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-htjvr\" (UniqueName: \"kubernetes.io/projected/d983df13-f989-4bf4-b445-347a5d5cba02-kube-api-access-htjvr\") on node \"addons-291433\" DevicePath \"\""
	Oct 24 19:06:11 addons-291433 kubelet[1558]: I1024 19:06:11.969644    1558 scope.go:117] "RemoveContainer" containerID="968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905"
	Oct 24 19:06:12 addons-291433 kubelet[1558]: I1024 19:06:12.079083    1558 scope.go:117] "RemoveContainer" containerID="968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905"
	Oct 24 19:06:12 addons-291433 kubelet[1558]: E1024 19:06:12.079767    1558 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905\": container with ID starting with 968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905 not found: ID does not exist" containerID="968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905"
	Oct 24 19:06:12 addons-291433 kubelet[1558]: I1024 19:06:12.079834    1558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905"} err="failed to get container status \"968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905\": rpc error: code = NotFound desc = could not find container \"968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905\": container with ID starting with 968c14677790ece147308c7c29d261ebc148ac21dc244321277ba9b856af6905 not found: ID does not exist"
	Oct 24 19:06:12 addons-291433 kubelet[1558]: I1024 19:06:12.456474    1558 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d983df13-f989-4bf4-b445-347a5d5cba02" path="/var/lib/kubelet/pods/d983df13-f989-4bf4-b445-347a5d5cba02/volumes"
	Oct 24 19:06:12 addons-291433 kubelet[1558]: I1024 19:06:12.991953    1558 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-9l9fz" podStartSLOduration=1.6725385780000002 podCreationTimestamp="2023-10-24 19:06:10 +0000 UTC" firstStartedPulling="2023-10-24 19:06:10.817666231 +0000 UTC m=+274.565640135" lastFinishedPulling="2023-10-24 19:06:12.136984447 +0000 UTC m=+275.884958354" observedRunningTime="2023-10-24 19:06:12.991493375 +0000 UTC m=+276.739467292" watchObservedRunningTime="2023-10-24 19:06:12.991856797 +0000 UTC m=+276.739830713"
	Oct 24 19:06:14 addons-291433 kubelet[1558]: I1024 19:06:14.455440    1558 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1ea60d59-e054-4731-aeda-8ac491a936d4" path="/var/lib/kubelet/pods/1ea60d59-e054-4731-aeda-8ac491a936d4/volumes"
	Oct 24 19:06:14 addons-291433 kubelet[1558]: I1024 19:06:14.455781    1558 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a9ada6a4-e4ab-4701-b0f1-2b71c2a452bb" path="/var/lib/kubelet/pods/a9ada6a4-e4ab-4701-b0f1-2b71c2a452bb/volumes"
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.391220    1558 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj8dq\" (UniqueName: \"kubernetes.io/projected/7726c5f2-392c-4758-8e64-5835b0b90009-kube-api-access-vj8dq\") pod \"7726c5f2-392c-4758-8e64-5835b0b90009\" (UID: \"7726c5f2-392c-4758-8e64-5835b0b90009\") "
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.391306    1558 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7726c5f2-392c-4758-8e64-5835b0b90009-webhook-cert\") pod \"7726c5f2-392c-4758-8e64-5835b0b90009\" (UID: \"7726c5f2-392c-4758-8e64-5835b0b90009\") "
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.394031    1558 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7726c5f2-392c-4758-8e64-5835b0b90009-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7726c5f2-392c-4758-8e64-5835b0b90009" (UID: "7726c5f2-392c-4758-8e64-5835b0b90009"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.394180    1558 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7726c5f2-392c-4758-8e64-5835b0b90009-kube-api-access-vj8dq" (OuterVolumeSpecName: "kube-api-access-vj8dq") pod "7726c5f2-392c-4758-8e64-5835b0b90009" (UID: "7726c5f2-392c-4758-8e64-5835b0b90009"). InnerVolumeSpecName "kube-api-access-vj8dq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.456205    1558 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7726c5f2-392c-4758-8e64-5835b0b90009" path="/var/lib/kubelet/pods/7726c5f2-392c-4758-8e64-5835b0b90009/volumes"
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.492179    1558 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7726c5f2-392c-4758-8e64-5835b0b90009-webhook-cert\") on node \"addons-291433\" DevicePath \"\""
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.492224    1558 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vj8dq\" (UniqueName: \"kubernetes.io/projected/7726c5f2-392c-4758-8e64-5835b0b90009-kube-api-access-vj8dq\") on node \"addons-291433\" DevicePath \"\""
	Oct 24 19:06:16 addons-291433 kubelet[1558]: I1024 19:06:16.985720    1558 scope.go:117] "RemoveContainer" containerID="d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4"
	Oct 24 19:06:17 addons-291433 kubelet[1558]: I1024 19:06:17.005096    1558 scope.go:117] "RemoveContainer" containerID="d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4"
	Oct 24 19:06:17 addons-291433 kubelet[1558]: E1024 19:06:17.005617    1558 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4\": container with ID starting with d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4 not found: ID does not exist" containerID="d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4"
	Oct 24 19:06:17 addons-291433 kubelet[1558]: I1024 19:06:17.005668    1558 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4"} err="failed to get container status \"d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4\": rpc error: code = NotFound desc = could not find container \"d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4\": container with ID starting with d73fcc78934d4bdd0ff41d5d7acea323cca4b910098655a022fa56601d6b8bf4 not found: ID does not exist"
	
	* 
	* ==> storage-provisioner [4009fa64132d873653f87821321d42140388ea47866836e954c3718b7c09889d] <==
	* I1024 19:02:24.843130       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:02:24.852420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:02:24.852558       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:02:24.861956       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:02:24.862179       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a53462b-2d8e-43d8-82b0-c7e3724f30b6", APIVersion:"v1", ResourceVersion:"896", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-291433_3052f455-b7f1-44aa-b022-16ac417803c4 became leader
	I1024 19:02:24.862217       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-291433_3052f455-b7f1-44aa-b022-16ac417803c4!
	I1024 19:02:24.963225       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-291433_3052f455-b7f1-44aa-b022-16ac417803c4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-291433 -n addons-291433
helpers_test.go:261: (dbg) Run:  kubectl --context addons-291433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (157.85s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (185.22s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-462645 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-462645 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.166965798s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-462645 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-462645 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d4c73095-210f-415e-b29b-934d50d55d9c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d4c73095-210f-415e-b29b-934d50d55d9c] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.008081684s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1024 19:13:18.869146  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:13:46.557453  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-462645 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.199022414s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
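
Exit status 28 here is curl's "operation timed out" propagated back through ssh: nothing answered on 127.0.0.1:80 inside the node before curl gave up. The check amounts to an HTTP GET against the ingress controller with an overridden Host header; a standalone Go sketch of an equivalent probe, where the URL, header and 10s timeout are illustrative rather than the test's exact values:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // routes to the nginx Ingress rule
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed:", err) // the analogue of curl's exit 28
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}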
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-462645 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1024 19:14:50.664175  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.669603  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.680073  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.700663  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.741190  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.821653  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:50.982132  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:51.302899  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:51.943882  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:53.224192  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:14:55.785149  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:15:00.906416  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
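The burst of cert_rotation errors above (and the pair after the earlier curl step) comes from stale client-certificate watchers, not from this failure: they reference the client.crt files of the addons-291433 and functional-558204 profiles, and the Audit log further down records that functional-558204 was deleted (`delete -p functional-558204`) before this test ran. A quick check that the referenced key material is indeed gone, assuming the same workspace layout:

    ls /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt \
      || echo "profile certs already removed"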
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009441051s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
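The nslookup timeout means nothing answered DNS queries on the node IP 192.168.49.2, which is exactly the symptom this step is built to catch once the ingress-dns responder is not serving. A faster manual probe than the test's 15s of retries, assuming `dig` is available on the host:

    # one query, two-second timeout, no retries
    dig +time=2 +tries=1 @192.168.49.2 hello-john.test
    # and confirm whether anything listens on 53/udp inside the node
    out/minikube-linux-amd64 -p ingress-addon-legacy-462645 ssh "sudo ss -ulpn 'sport = :53'"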
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons disable ingress-dns --alsologtostderr -v=1: (2.219632084s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons disable ingress --alsologtostderr -v=1
E1024 19:15:11.147311  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons disable ingress --alsologtostderr -v=1: (7.524807334s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-462645
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-462645:

-- stdout --
	[
	    {
	        "Id": "273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358",
	        "Created": "2023-10-24T19:10:57.432349251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519824,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:10:57.772972043Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358/hostname",
	        "HostsPath": "/var/lib/docker/containers/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358/hosts",
	        "LogPath": "/var/lib/docker/containers/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358-json.log",
	        "Name": "/ingress-addon-legacy-462645",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-462645:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-462645",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/003eaca12a638422eee5c052ae2fd0f6308c5bf22388c65d411bda855d12e226-init/diff:/var/lib/docker/overlay2/a59d6c70e56c008d6cc4bbed94412eb512943c9d608e3d99495b95d6ce6d39c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/003eaca12a638422eee5c052ae2fd0f6308c5bf22388c65d411bda855d12e226/merged",
	                "UpperDir": "/var/lib/docker/overlay2/003eaca12a638422eee5c052ae2fd0f6308c5bf22388c65d411bda855d12e226/diff",
	                "WorkDir": "/var/lib/docker/overlay2/003eaca12a638422eee5c052ae2fd0f6308c5bf22388c65d411bda855d12e226/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-462645",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-462645/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-462645",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-462645",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-462645",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9301b40f989b14bdffdd8030d20995609b047118c71dde0cd9afcd9da1bb8bd0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9301b40f989b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-462645": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "273a3aa1a5fc",
	                        "ingress-addon-legacy-462645"
	                    ],
	                    "NetworkID": "6d33fb53219e66cfe5d2c4e9c3c4573c63f8aa89a671055067412d6957ea397a",
	                    "EndpointID": "76cb595d5ea2314864454a9cdb23548dc469c2a95f15f1731d4e66af233bab56",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
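Most of what the post-mortem needs from this inspect dump is the ephemeral host-port mapping (22 -> 33210, 8443 -> 33207, and so on). The same values can be pulled without reading the full JSON, using the format template the log itself applies later for the SSH port, or plain `docker port`:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-462645
    docker port ingress-addon-legacy-462645 8443/tcp   # prints 127.0.0.1:33207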
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-462645 -n ingress-addon-legacy-462645
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-462645 logs -n 25: (1.23933477s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-558204                                                   | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-558204                                                   | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-558204                                                   | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount2 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-558204 ssh findmnt                                          | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-558204 ssh findmnt                                          | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-558204 ssh findmnt                                          | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-558204                                                   | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-558204 ssh pgrep                                            | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-558204 image build -t                                       | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | localhost/my-image:functional-558204                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-558204                                                      | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-558204 image ls                                             | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	| delete         | -p functional-558204                                                   | functional-558204           | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:10 UTC |
	| start          | -p ingress-addon-legacy-462645                                         | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:10 UTC | 24 Oct 23 19:11 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-462645                                            | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:11 UTC | 24 Oct 23 19:12 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-462645                                            | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:12 UTC | 24 Oct 23 19:12 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-462645                                            | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:12 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-462645 ip                                         | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:14 UTC | 24 Oct 23 19:14 UTC |
	| addons         | ingress-addon-legacy-462645                                            | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:15 UTC | 24 Oct 23 19:15 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-462645                                            | ingress-addon-legacy-462645 | jenkins | v1.31.2 | 24 Oct 23 19:15 UTC | 24 Oct 23 19:15 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:10:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:10:44.500006  519187 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:10:44.500383  519187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:44.500394  519187 out.go:309] Setting ErrFile to fd 2...
	I1024 19:10:44.500400  519187 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:44.500717  519187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:10:44.501574  519187 out.go:303] Setting JSON to false
	I1024 19:10:44.503165  519187 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10392,"bootTime":1698164253,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:10:44.503260  519187 start.go:138] virtualization: kvm guest
	I1024 19:10:44.506411  519187 out.go:177] * [ingress-addon-legacy-462645] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:10:44.508633  519187 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:10:44.508590  519187 notify.go:220] Checking for updates...
	I1024 19:10:44.510675  519187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:10:44.512868  519187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:10:44.515016  519187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:10:44.516919  519187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:10:44.519092  519187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:10:44.521150  519187 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:10:44.544568  519187 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:10:44.544681  519187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:10:44.601950  519187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-24 19:10:44.591847891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:10:44.602059  519187 docker.go:295] overlay module found
	I1024 19:10:44.604426  519187 out.go:177] * Using the docker driver based on user configuration
	I1024 19:10:44.606283  519187 start.go:298] selected driver: docker
	I1024 19:10:44.606326  519187 start.go:902] validating driver "docker" against <nil>
	I1024 19:10:44.606342  519187 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:10:44.607284  519187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:10:44.674556  519187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-10-24 19:10:44.664224725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:10:44.674771  519187 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:10:44.675061  519187 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:10:44.677453  519187 out.go:177] * Using Docker driver with root privileges
	I1024 19:10:44.679671  519187 cni.go:84] Creating CNI manager for ""
	I1024 19:10:44.679709  519187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:10:44.679725  519187 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:10:44.679740  519187 start_flags.go:323] config:
	{Name:ingress-addon-legacy-462645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-462645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:10:44.681858  519187 out.go:177] * Starting control plane node ingress-addon-legacy-462645 in cluster ingress-addon-legacy-462645
	I1024 19:10:44.683960  519187 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:10:44.685763  519187 out.go:177] * Pulling base image ...
	I1024 19:10:44.687680  519187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:10:44.687736  519187 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:10:44.706538  519187 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:10:44.706574  519187 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:10:44.752257  519187 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1024 19:10:44.752301  519187 cache.go:57] Caching tarball of preloaded images
	I1024 19:10:44.752516  519187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:10:44.755044  519187 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1024 19:10:44.756861  519187 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:10:44.791328  519187 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1024 19:10:48.677207  519187 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:10:48.677345  519187 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:10:49.716278  519187 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
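The checksum verified here is the md5 embedded in the download URL above (`checksum=md5:0d02e096853189c5b37812b400898e14`). An equivalent manual check of the cached tarball, assuming the same cache path, is:

    echo "0d02e096853189c5b37812b400898e14  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -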
	I1024 19:10:49.716627  519187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/config.json ...
	I1024 19:10:49.716657  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/config.json: {Name:mk7172bf0a324d92e0372a555734f0e15f5eca0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:10:49.716869  519187 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:10:49.716905  519187 start.go:365] acquiring machines lock for ingress-addon-legacy-462645: {Name:mk940a0d6de0bbf424798c0d6b8393a029e4369b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:10:49.716965  519187 start.go:369] acquired machines lock for "ingress-addon-legacy-462645" in 45.413µs
	I1024 19:10:49.716993  519187 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-462645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-462645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:10:49.717083  519187 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:10:49.721640  519187 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1024 19:10:49.721944  519187 start.go:159] libmachine.API.Create for "ingress-addon-legacy-462645" (driver="docker")
	I1024 19:10:49.721983  519187 client.go:168] LocalClient.Create starting
	I1024 19:10:49.722068  519187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem
	I1024 19:10:49.722104  519187 main.go:141] libmachine: Decoding PEM data...
	I1024 19:10:49.722128  519187 main.go:141] libmachine: Parsing certificate...
	I1024 19:10:49.722180  519187 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem
	I1024 19:10:49.722200  519187 main.go:141] libmachine: Decoding PEM data...
	I1024 19:10:49.722211  519187 main.go:141] libmachine: Parsing certificate...
	I1024 19:10:49.722527  519187 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-462645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:10:49.739023  519187 cli_runner.go:211] docker network inspect ingress-addon-legacy-462645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:10:49.739094  519187 network_create.go:281] running [docker network inspect ingress-addon-legacy-462645] to gather additional debugging logs...
	I1024 19:10:49.739111  519187 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-462645
	W1024 19:10:49.755680  519187 cli_runner.go:211] docker network inspect ingress-addon-legacy-462645 returned with exit code 1
	I1024 19:10:49.755725  519187 network_create.go:284] error running [docker network inspect ingress-addon-legacy-462645]: docker network inspect ingress-addon-legacy-462645: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-462645 not found
	I1024 19:10:49.755744  519187 network_create.go:286] output of [docker network inspect ingress-addon-legacy-462645]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-462645 not found
	
	** /stderr **
	I1024 19:10:49.755875  519187 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:10:49.774108  519187 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00055bb90}
	I1024 19:10:49.774170  519187 network_create.go:124] attempt to create docker network ingress-addon-legacy-462645 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:10:49.774232  519187 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-462645 ingress-addon-legacy-462645
	I1024 19:10:49.847463  519187 network_create.go:108] docker network ingress-addon-legacy-462645 192.168.49.0/24 created
	I1024 19:10:49.847514  519187 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-462645" container
	I1024 19:10:49.847591  519187 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:10:49.866311  519187 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-462645 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-462645 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:10:49.887939  519187 oci.go:103] Successfully created a docker volume ingress-addon-legacy-462645
	I1024 19:10:49.888061  519187 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-462645-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-462645 --entrypoint /usr/bin/test -v ingress-addon-legacy-462645:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:10:51.690749  519187 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-462645-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-462645 --entrypoint /usr/bin/test -v ingress-addon-legacy-462645:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.802610221s)
	I1024 19:10:51.690787  519187 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-462645
	I1024 19:10:51.690811  519187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:10:51.690837  519187 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:10:51.690905  519187 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-462645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:10:57.360470  519187 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-462645:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.669489714s)
	I1024 19:10:57.360506  519187 kic.go:200] duration metric: took 5.669665 seconds to extract preloaded images to volume
	W1024 19:10:57.360695  519187 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:10:57.360838  519187 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:10:57.416959  519187 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-462645 --name ingress-addon-legacy-462645 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-462645 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-462645 --network ingress-addon-legacy-462645 --ip 192.168.49.2 --volume ingress-addon-legacy-462645:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
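Each `--publish=127.0.0.1::<port>` in the command above leaves the host-port field empty, so Docker assigns a free ephemeral port bound to 127.0.0.1; that is why the PortBindings in the earlier inspect output show HostPort "" while NetworkSettings.Ports shows the assigned 33206-33210 range. A standalone sketch of the same flag form, with a hypothetical container name and image:

    docker run -d --name demo --publish=127.0.0.1::80 nginx
    docker port demo 80/tcp   # prints whichever ephemeral port was assigned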
	I1024 19:10:57.782719  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Running}}
	I1024 19:10:57.802716  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:10:57.826727  519187 cli_runner.go:164] Run: docker exec ingress-addon-legacy-462645 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:10:57.915401  519187 oci.go:144] the created container "ingress-addon-legacy-462645" has a running status.
	I1024 19:10:57.915473  519187 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa...
	I1024 19:10:58.039985  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 19:10:58.040042  519187 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:10:58.060731  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:10:58.080377  519187 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:10:58.080400  519187 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-462645 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:10:58.177543  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:10:58.199280  519187 machine.go:88] provisioning docker machine ...
	I1024 19:10:58.199330  519187 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-462645"
	I1024 19:10:58.199402  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:10:58.221086  519187 main.go:141] libmachine: Using SSH client type: native
	I1024 19:10:58.221466  519187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33210 <nil> <nil>}
	I1024 19:10:58.221484  519187 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-462645 && echo "ingress-addon-legacy-462645" | sudo tee /etc/hostname
	I1024 19:10:58.222131  519187 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47402->127.0.0.1:33210: read: connection reset by peer
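The connection reset on the first SSH dial is expected this early: the container had been running for under a second and sshd was not yet listening; libmachine retries, and the next entry shows the command succeeding about three seconds later. A hedged sketch of the same wait-until-ready pattern, using the published SSH port from the inspect output:

    # poll until sshd on the mapped port accepts TCP connections
    until nc -z 127.0.0.1 33210; do sleep 1; done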
	I1024 19:11:01.362523  519187 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-462645
	
	I1024 19:11:01.362617  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:01.383698  519187 main.go:141] libmachine: Using SSH client type: native
	I1024 19:11:01.384113  519187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33210 <nil> <nil>}
	I1024 19:11:01.384135  519187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-462645' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-462645/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-462645' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:11:01.514414  519187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:11:01.514470  519187 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:11:01.514640  519187 ubuntu.go:177] setting up certificates
	I1024 19:11:01.514683  519187 provision.go:83] configureAuth start
	I1024 19:11:01.514781  519187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-462645
	I1024 19:11:01.534928  519187 provision.go:138] copyHostCerts
	I1024 19:11:01.534991  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:11:01.535028  519187 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:11:01.535049  519187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:11:01.535128  519187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:11:01.535221  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:11:01.535247  519187 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:11:01.535256  519187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:11:01.535290  519187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:11:01.535350  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:11:01.535373  519187 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:11:01.535381  519187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:11:01.535411  519187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:11:01.535471  519187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-462645 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-462645]
	I1024 19:11:01.721050  519187 provision.go:172] copyRemoteCerts
	I1024 19:11:01.721114  519187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:11:01.721150  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:01.740328  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:01.829608  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:11:01.829685  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1024 19:11:01.857428  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:11:01.857494  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:11:01.879157  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:11:01.879220  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:11:01.901519  519187 provision.go:86] duration metric: configureAuth took 386.818704ms
	I1024 19:11:01.901559  519187 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:11:01.901723  519187 config.go:182] Loaded profile config "ingress-addon-legacy-462645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:11:01.901846  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:01.921402  519187 main.go:141] libmachine: Using SSH client type: native
	I1024 19:11:01.921911  519187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33210 <nil> <nil>}
	I1024 19:11:01.921937  519187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:11:02.181088  519187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:11:02.181122  519187 machine.go:91] provisioned docker machine in 3.981809956s
	I1024 19:11:02.181135  519187 client.go:171] LocalClient.Create took 12.459132707s
	I1024 19:11:02.181150  519187 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-462645" took 12.459204689s
	I1024 19:11:02.181160  519187 start.go:300] post-start starting for "ingress-addon-legacy-462645" (driver="docker")
	I1024 19:11:02.181173  519187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:11:02.181228  519187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:11:02.181275  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:02.198157  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:02.291652  519187 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:11:02.295664  519187 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:11:02.295699  519187 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:11:02.295707  519187 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:11:02.295715  519187 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:11:02.295727  519187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:11:02.295783  519187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:11:02.295878  519187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:11:02.295892  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /etc/ssl/certs/4783232.pem
	I1024 19:11:02.295991  519187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:11:02.305916  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:11:02.331805  519187 start.go:303] post-start completed in 150.628776ms
	I1024 19:11:02.332160  519187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-462645
	I1024 19:11:02.350506  519187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/config.json ...
	I1024 19:11:02.350774  519187 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:11:02.350815  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:02.371995  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:02.461659  519187 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:11:02.465948  519187 start.go:128] duration metric: createHost completed in 12.748846147s
	I1024 19:11:02.465975  519187 start.go:83] releasing machines lock for "ingress-addon-legacy-462645", held for 12.74899695s
	I1024 19:11:02.466042  519187 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-462645
	I1024 19:11:02.483138  519187 ssh_runner.go:195] Run: cat /version.json
	I1024 19:11:02.483204  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:02.483211  519187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:11:02.483277  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:02.501458  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:02.502455  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:02.685461  519187 ssh_runner.go:195] Run: systemctl --version
	I1024 19:11:02.690054  519187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:11:02.830140  519187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:11:02.835629  519187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:11:02.858764  519187 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:11:02.858866  519187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:11:02.892118  519187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1024 19:11:02.892145  519187 start.go:472] detecting cgroup driver to use...
	I1024 19:11:02.892189  519187 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:11:02.892241  519187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:11:02.908738  519187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:11:02.920990  519187 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:11:02.921139  519187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:11:02.935695  519187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:11:02.950996  519187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:11:03.035663  519187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:11:03.126908  519187 docker.go:214] disabling docker service ...
	I1024 19:11:03.126991  519187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:11:03.148125  519187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:11:03.160027  519187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:11:03.245093  519187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:11:03.333717  519187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:11:03.345517  519187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:11:03.362260  519187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:11:03.362322  519187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:11:03.372653  519187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:11:03.372723  519187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:11:03.386873  519187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:11:03.398182  519187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:11:03.411861  519187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:11:03.422504  519187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:11:03.431747  519187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:11:03.441321  519187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:11:03.523314  519187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:11:03.632861  519187 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:11:03.632976  519187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:11:03.636692  519187 start.go:540] Will wait 60s for crictl version
	I1024 19:11:03.636745  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:03.640000  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:11:03.678571  519187 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:11:03.678691  519187 ssh_runner.go:195] Run: crio --version
	I1024 19:11:03.723373  519187 ssh_runner.go:195] Run: crio --version
	I1024 19:11:03.769752  519187 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1024 19:11:03.772128  519187 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-462645 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:11:03.789989  519187 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:11:03.795653  519187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:11:03.810044  519187 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:11:03.810119  519187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:11:03.863050  519187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:11:03.863109  519187 ssh_runner.go:195] Run: which lz4
	I1024 19:11:03.866461  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:11:03.866561  519187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 19:11:03.870064  519187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:11:03.870102  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1024 19:11:05.087394  519187 crio.go:444] Took 1.220865 seconds to copy over tarball
	I1024 19:11:05.087464  519187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:11:07.681023  519187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.593517262s)
	I1024 19:11:07.681059  519187 crio.go:451] Took 2.593639 seconds to extract the tarball
	I1024 19:11:07.681068  519187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:11:07.752881  519187 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:11:07.786475  519187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:11:07.786499  519187 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:11:07.786550  519187 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:11:07.786583  519187 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:11:07.786623  519187 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:11:07.786645  519187 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1024 19:11:07.786690  519187 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1024 19:11:07.786760  519187 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:11:07.786786  519187 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:11:07.786935  519187 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:11:07.788175  519187 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:11:07.788199  519187 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:11:07.788213  519187 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:11:07.788184  519187 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:11:07.788216  519187 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:11:07.788255  519187 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:11:07.788255  519187 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1024 19:11:07.788263  519187 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1024 19:11:07.976834  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1024 19:11:08.018188  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1024 19:11:08.018953  519187 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1024 19:11:08.018993  519187 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:11:08.019027  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.022043  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:11:08.029971  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:11:08.048132  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1024 19:11:08.063370  519187 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1024 19:11:08.063417  519187 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1024 19:11:08.063445  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1024 19:11:08.063470  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.066757  519187 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1024 19:11:08.066897  519187 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:11:08.067000  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.075579  519187 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1024 19:11:08.075631  519187 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:11:08.075692  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.088369  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:11:08.139676  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:11:08.144579  519187 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1024 19:11:08.144631  519187 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1024 19:11:08.144694  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.152191  519187 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:11:08.165196  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1024 19:11:08.165250  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1024 19:11:08.165372  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:11:08.165481  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:11:08.262391  519187 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1024 19:11:08.262474  519187 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:11:08.262587  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.354067  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1024 19:11:08.354117  519187 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1024 19:11:08.354164  519187 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:11:08.354217  519187 ssh_runner.go:195] Run: which crictl
	I1024 19:11:08.354244  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1024 19:11:08.354324  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1024 19:11:08.354356  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1024 19:11:08.354415  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:11:08.394426  519187 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:11:08.394453  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1024 19:11:08.394475  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1024 19:11:08.447535  519187 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1024 19:11:08.447605  519187 cache_images.go:92] LoadImages completed in 661.093049ms
	W1024 19:11:08.447696  519187 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1024 19:11:08.447766  519187 ssh_runner.go:195] Run: crio config
	I1024 19:11:08.498102  519187 cni.go:84] Creating CNI manager for ""
	I1024 19:11:08.498124  519187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:11:08.498142  519187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:11:08.498183  519187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-462645 NodeName:ingress-addon-legacy-462645 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 19:11:08.498338  519187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-462645"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:11:08.498430  519187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-462645 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-462645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:11:08.498490  519187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1024 19:11:08.507075  519187 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:11:08.507165  519187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:11:08.515288  519187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1024 19:11:08.534287  519187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1024 19:11:08.553769  519187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1024 19:11:08.573751  519187 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:11:08.577839  519187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:11:08.588750  519187 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645 for IP: 192.168.49.2
	I1024 19:11:08.588835  519187 certs.go:190] acquiring lock for shared ca certs: {Name:mkd071e4924662af2a94ad3f2018330ff8506826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:08.588984  519187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key
	I1024 19:11:08.589028  519187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key
	I1024 19:11:08.589084  519187 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key
	I1024 19:11:08.589113  519187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt with IP's: []
	I1024 19:11:08.659306  519187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt ...
	I1024 19:11:08.659340  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: {Name:mkf1b7f8b3c2258b51d35621bfd6dc3ab4748fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:08.659511  519187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key ...
	I1024 19:11:08.659521  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key: {Name:mkc5b6efde0316473474a7906d8bb2e061401d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:08.659617  519187 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key.dd3b5fb2
	I1024 19:11:08.659632  519187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:11:08.787805  519187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt.dd3b5fb2 ...
	I1024 19:11:08.787843  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt.dd3b5fb2: {Name:mk6a6587ba7a0537c19fce1ebd946cb9d0c19a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:08.788010  519187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key.dd3b5fb2 ...
	I1024 19:11:08.788021  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key.dd3b5fb2: {Name:mk03a411aa3efb4893344dcfb41e4363bd140d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:08.788093  519187 certs.go:337] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt
	I1024 19:11:08.788154  519187 certs.go:341] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key
	I1024 19:11:08.788203  519187 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.key
	I1024 19:11:08.788234  519187 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.crt with IP's: []
	I1024 19:11:09.029243  519187 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.crt ...
	I1024 19:11:09.029282  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.crt: {Name:mk9ca47c6cfab59ee6f0e862ef9ab613b437e664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:09.029455  519187 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.key ...
	I1024 19:11:09.029472  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.key: {Name:mkf6071958f4cb110f74bd12828f3ef99804f0c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:09.029556  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:11:09.029574  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:11:09.029584  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:11:09.029596  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:11:09.029607  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:11:09.029617  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:11:09.029634  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:11:09.029647  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:11:09.029701  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem (1338 bytes)
	W1024 19:11:09.029741  519187 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323_empty.pem, impossibly tiny 0 bytes
	I1024 19:11:09.029759  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:11:09.029783  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:11:09.029812  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:11:09.029837  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem (1675 bytes)
	I1024 19:11:09.029878  519187 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:11:09.029903  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem -> /usr/share/ca-certificates/478323.pem
	I1024 19:11:09.029916  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /usr/share/ca-certificates/4783232.pem
	I1024 19:11:09.029931  519187 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:11:09.030552  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:11:09.056716  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:11:09.081976  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:11:09.107514  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:11:09.134366  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:11:09.160410  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:11:09.184560  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:11:09.208575  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:11:09.230746  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem --> /usr/share/ca-certificates/478323.pem (1338 bytes)
	I1024 19:11:09.254843  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /usr/share/ca-certificates/4783232.pem (1708 bytes)
	I1024 19:11:09.282887  519187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:11:09.310620  519187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:11:09.329213  519187 ssh_runner.go:195] Run: openssl version
	I1024 19:11:09.335487  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478323.pem && ln -fs /usr/share/ca-certificates/478323.pem /etc/ssl/certs/478323.pem"
	I1024 19:11:09.345794  519187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478323.pem
	I1024 19:11:09.350280  519187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:11:09.350341  519187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478323.pem
	I1024 19:11:09.357469  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478323.pem /etc/ssl/certs/51391683.0"
	I1024 19:11:09.367710  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783232.pem && ln -fs /usr/share/ca-certificates/4783232.pem /etc/ssl/certs/4783232.pem"
	I1024 19:11:09.376986  519187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783232.pem
	I1024 19:11:09.380914  519187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:11:09.380982  519187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783232.pem
	I1024 19:11:09.387752  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783232.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:11:09.397519  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:11:09.407735  519187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:11:09.412497  519187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:11:09.412563  519187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:11:09.420100  519187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:11:09.432309  519187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:11:09.436617  519187 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:11:09.436716  519187 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-462645 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-462645 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:11:09.436901  519187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:11:09.436981  519187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:11:09.475475  519187 cri.go:89] found id: ""
	I1024 19:11:09.475559  519187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:11:09.484967  519187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:11:09.493576  519187 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:11:09.493653  519187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:11:09.501825  519187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:11:09.504081  519187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:11:09.551975  519187 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1024 19:11:09.552048  519187 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:11:09.597829  519187 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:11:09.597931  519187 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1024 19:11:09.597997  519187 kubeadm.go:322] OS: Linux
	I1024 19:11:09.598184  519187 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:11:09.598701  519187 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:11:09.598759  519187 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:11:09.598815  519187 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:11:09.598908  519187 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:11:09.598997  519187 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:11:09.679726  519187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:11:09.679836  519187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:11:09.679936  519187 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 19:11:09.899315  519187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:11:09.900366  519187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:11:09.900466  519187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:11:09.989214  519187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:11:09.992298  519187 out.go:204]   - Generating certificates and keys ...
	I1024 19:11:09.992486  519187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:11:09.992590  519187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:11:10.118621  519187 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:11:10.209295  519187 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:11:10.310412  519187 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:11:10.785326  519187 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:11:10.919830  519187 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:11:10.919993  519187 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-462645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:11:11.009419  519187 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:11:11.009571  519187 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-462645 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:11:11.136233  519187 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:11:11.443589  519187 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:11:11.627184  519187 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:11:11.627324  519187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:11:11.768653  519187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:11:11.907400  519187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:11:12.290581  519187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:11:12.502350  519187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:11:12.503142  519187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:11:12.507722  519187 out.go:204]   - Booting up control plane ...
	I1024 19:11:12.507837  519187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:11:12.510606  519187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:11:12.511567  519187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:11:12.512526  519187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:11:12.515048  519187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:11:19.018765  519187 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.503769 seconds
	I1024 19:11:19.018950  519187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:11:19.032998  519187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:11:19.553846  519187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:11:19.554049  519187 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-462645 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 19:11:20.065590  519187 kubeadm.go:322] [bootstrap-token] Using token: v6spcg.9krbad4f4spr5mzi
	I1024 19:11:20.067861  519187 out.go:204]   - Configuring RBAC rules ...
	I1024 19:11:20.068033  519187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:11:20.072738  519187 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:11:20.080253  519187 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:11:20.082693  519187 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:11:20.084888  519187 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:11:20.087139  519187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:11:20.099791  519187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:11:20.353159  519187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:11:20.479192  519187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:11:20.480239  519187 kubeadm.go:322] 
	I1024 19:11:20.480332  519187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:11:20.480371  519187 kubeadm.go:322] 
	I1024 19:11:20.480453  519187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:11:20.480461  519187 kubeadm.go:322] 
	I1024 19:11:20.480482  519187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:11:20.480529  519187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:11:20.480577  519187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:11:20.480584  519187 kubeadm.go:322] 
	I1024 19:11:20.480625  519187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:11:20.480690  519187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:11:20.480762  519187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:11:20.480768  519187 kubeadm.go:322] 
	I1024 19:11:20.480862  519187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:11:20.480991  519187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:11:20.481031  519187 kubeadm.go:322] 
	I1024 19:11:20.481162  519187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token v6spcg.9krbad4f4spr5mzi \
	I1024 19:11:20.481323  519187 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 \
	I1024 19:11:20.481349  519187 kubeadm.go:322]     --control-plane 
	I1024 19:11:20.481392  519187 kubeadm.go:322] 
	I1024 19:11:20.481489  519187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:11:20.481506  519187 kubeadm.go:322] 
	I1024 19:11:20.481582  519187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token v6spcg.9krbad4f4spr5mzi \
	I1024 19:11:20.481707  519187 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 
	I1024 19:11:20.484033  519187 kubeadm.go:322] W1024 19:11:09.551213    1387 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1024 19:11:20.484290  519187 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1024 19:11:20.484404  519187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:11:20.484529  519187 kubeadm.go:322] W1024 19:11:12.510020    1387 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:11:20.484653  519187 kubeadm.go:322] W1024 19:11:12.511179    1387 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:11:20.484676  519187 cni.go:84] Creating CNI manager for ""
	I1024 19:11:20.484686  519187 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:11:20.489046  519187 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:11:20.491125  519187 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:11:20.495238  519187 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1024 19:11:20.495259  519187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:11:20.513052  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:11:21.057054  519187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:11:21.057159  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=ingress-addon-legacy-462645 minikube.k8s.io/updated_at=2023_10_24T19_11_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:21.057162  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:21.065247  519187 ops.go:34] apiserver oom_adj: -16
	I1024 19:11:21.183541  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:21.354843  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:22.043211  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:22.542624  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:23.042559  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:23.543195  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:24.043322  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:24.542494  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:25.043568  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:25.543159  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:26.043553  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:26.543349  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:27.043249  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:27.542835  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:28.042980  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:28.542729  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:29.043304  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:29.543369  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:30.042768  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:30.542735  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:31.043195  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:31.543394  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:32.043228  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:32.543139  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:33.043164  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:33.543425  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:34.042540  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:34.543011  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:35.043390  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:35.543146  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:36.043231  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:36.543362  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:37.042681  519187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:11:37.144531  519187 kubeadm.go:1081] duration metric: took 16.08744764s to wait for elevateKubeSystemPrivileges.
	I1024 19:11:37.144578  519187 kubeadm.go:406] StartCluster complete in 27.707889869s
	I1024 19:11:37.144601  519187 settings.go:142] acquiring lock: {Name:mk9f191a52d3ce53608a65d0f0798312edc39465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:37.144666  519187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:11:37.145533  519187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/kubeconfig: {Name:mkcf54ea0dedcb61df1368dce9070a6aebbbad94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:11:37.145781  519187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:11:37.145878  519187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:11:37.145968  519187 config.go:182] Loaded profile config "ingress-addon-legacy-462645": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:11:37.145975  519187 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-462645"
	I1024 19:11:37.146017  519187 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-462645"
	I1024 19:11:37.146022  519187 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-462645"
	I1024 19:11:37.146048  519187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-462645"
	I1024 19:11:37.146087  519187 host.go:66] Checking if "ingress-addon-legacy-462645" exists ...
	I1024 19:11:37.146473  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:11:37.146435  519187 kapi.go:59] client config for ingress-addon-legacy-462645: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:11:37.146655  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:11:37.147258  519187 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:11:37.165577  519187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-462645" context rescaled to 1 replicas
	I1024 19:11:37.165631  519187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:11:37.168309  519187 out.go:177] * Verifying Kubernetes components...
	I1024 19:11:37.171131  519187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:11:37.172650  519187 kapi.go:59] client config for ingress-addon-legacy-462645: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:11:37.173127  519187 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-462645"
	I1024 19:11:37.173177  519187 host.go:66] Checking if "ingress-addon-legacy-462645" exists ...
	I1024 19:11:37.173765  519187 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-462645 --format={{.State.Status}}
	I1024 19:11:37.178647  519187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:11:37.181884  519187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:11:37.181909  519187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:11:37.181986  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:37.203787  519187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:11:37.203809  519187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:11:37.203868  519187 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-462645
	I1024 19:11:37.207879  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:37.226322  519187 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33210 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/ingress-addon-legacy-462645/id_rsa Username:docker}
	I1024 19:11:37.367463  519187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:11:37.368290  519187 kapi.go:59] client config for ingress-addon-legacy-462645: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:11:37.368626  519187 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-462645" to be "Ready" ...
	I1024 19:11:37.566849  519187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:11:37.642691  519187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:11:37.957403  519187 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1024 19:11:38.178971  519187 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1024 19:11:38.181143  519187 addons.go:502] enable addons completed in 1.035235048s: enabled=[default-storageclass storage-provisioner]
	I1024 19:11:39.383916  519187 node_ready.go:58] node "ingress-addon-legacy-462645" has status "Ready":"False"
	I1024 19:11:41.882482  519187 node_ready.go:58] node "ingress-addon-legacy-462645" has status "Ready":"False"
	I1024 19:11:44.383431  519187 node_ready.go:58] node "ingress-addon-legacy-462645" has status "Ready":"False"
	I1024 19:11:46.880994  519187 node_ready.go:58] node "ingress-addon-legacy-462645" has status "Ready":"False"
	I1024 19:11:48.881559  519187 node_ready.go:58] node "ingress-addon-legacy-462645" has status "Ready":"False"
	I1024 19:11:51.381675  519187 node_ready.go:49] node "ingress-addon-legacy-462645" has status "Ready":"True"
	I1024 19:11:51.381700  519187 node_ready.go:38] duration metric: took 14.013037517s waiting for node "ingress-addon-legacy-462645" to be "Ready" ...
	I1024 19:11:51.381711  519187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:11:51.388829  519187 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-6sbb8" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:53.398021  519187 pod_ready.go:102] pod "coredns-66bff467f8-6sbb8" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-24 19:11:36 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1024 19:11:55.900473  519187 pod_ready.go:102] pod "coredns-66bff467f8-6sbb8" in "kube-system" namespace has status "Ready":"False"
	I1024 19:11:58.402682  519187 pod_ready.go:92] pod "coredns-66bff467f8-6sbb8" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.402729  519187 pod_ready.go:81] duration metric: took 7.013872632s waiting for pod "coredns-66bff467f8-6sbb8" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.402744  519187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.409439  519187 pod_ready.go:92] pod "etcd-ingress-addon-legacy-462645" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.409470  519187 pod_ready.go:81] duration metric: took 6.714795ms waiting for pod "etcd-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.409487  519187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.414261  519187 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-462645" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.414287  519187 pod_ready.go:81] duration metric: took 4.792903ms waiting for pod "kube-apiserver-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.414299  519187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.419507  519187 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-462645" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.419535  519187 pod_ready.go:81] duration metric: took 5.227763ms waiting for pod "kube-controller-manager-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.419558  519187 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p67m6" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.425362  519187 pod_ready.go:92] pod "kube-proxy-p67m6" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.425401  519187 pod_ready.go:81] duration metric: took 5.83278ms waiting for pod "kube-proxy-p67m6" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.425418  519187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.596013  519187 request.go:629] Waited for 170.478128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-462645
	I1024 19:11:58.796502  519187 request.go:629] Waited for 196.462886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-462645
	I1024 19:11:58.799455  519187 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-462645" in "kube-system" namespace has status "Ready":"True"
	I1024 19:11:58.799488  519187 pod_ready.go:81] duration metric: took 374.058532ms waiting for pod "kube-scheduler-ingress-addon-legacy-462645" in "kube-system" namespace to be "Ready" ...
	I1024 19:11:58.799514  519187 pod_ready.go:38] duration metric: took 7.417786016s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:11:58.799535  519187 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:11:58.799603  519187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:11:58.811446  519187 api_server.go:72] duration metric: took 21.645753722s to wait for apiserver process to appear ...
	I1024 19:11:58.811473  519187 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:11:58.811502  519187 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:11:58.816806  519187 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:11:58.817738  519187 api_server.go:141] control plane version: v1.18.20
	I1024 19:11:58.817767  519187 api_server.go:131] duration metric: took 6.284083ms to wait for apiserver health ...
	I1024 19:11:58.817777  519187 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:11:58.996292  519187 request.go:629] Waited for 178.411004ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:11:59.003100  519187 system_pods.go:59] 8 kube-system pods found
	I1024 19:11:59.003179  519187 system_pods.go:61] "coredns-66bff467f8-6sbb8" [43516a06-a0c9-4122-be5e-3e2b04d563fc] Running
	I1024 19:11:59.003195  519187 system_pods.go:61] "etcd-ingress-addon-legacy-462645" [9af47b29-ac0d-4a34-b17c-3cadc008660f] Running
	I1024 19:11:59.003202  519187 system_pods.go:61] "kindnet-fmxm9" [9cbca5c5-42ed-4e97-aac3-c00daecbefa1] Running
	I1024 19:11:59.003220  519187 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-462645" [377093d1-9b35-40c0-82cd-caa9780ebdea] Running
	I1024 19:11:59.003231  519187 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-462645" [dc66da1c-434f-42dc-8976-38d690187f9c] Running
	I1024 19:11:59.003239  519187 system_pods.go:61] "kube-proxy-p67m6" [2070ad24-7067-49d8-a623-ad5bc1e23180] Running
	I1024 19:11:59.003245  519187 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-462645" [1885068c-3a87-4d0a-9c67-cc30d734f621] Running
	I1024 19:11:59.003257  519187 system_pods.go:61] "storage-provisioner" [762d7c20-9371-406c-884e-0e2401b336e1] Running
	I1024 19:11:59.003268  519187 system_pods.go:74] duration metric: took 185.482018ms to wait for pod list to return data ...
	I1024 19:11:59.003302  519187 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:11:59.195880  519187 request.go:629] Waited for 192.468456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:11:59.198525  519187 default_sa.go:45] found service account: "default"
	I1024 19:11:59.198557  519187 default_sa.go:55] duration metric: took 195.246585ms for default service account to be created ...
	I1024 19:11:59.198571  519187 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:11:59.396151  519187 request.go:629] Waited for 197.450934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:11:59.402974  519187 system_pods.go:86] 8 kube-system pods found
	I1024 19:11:59.403018  519187 system_pods.go:89] "coredns-66bff467f8-6sbb8" [43516a06-a0c9-4122-be5e-3e2b04d563fc] Running
	I1024 19:11:59.403027  519187 system_pods.go:89] "etcd-ingress-addon-legacy-462645" [9af47b29-ac0d-4a34-b17c-3cadc008660f] Running
	I1024 19:11:59.403034  519187 system_pods.go:89] "kindnet-fmxm9" [9cbca5c5-42ed-4e97-aac3-c00daecbefa1] Running
	I1024 19:11:59.403047  519187 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-462645" [377093d1-9b35-40c0-82cd-caa9780ebdea] Running
	I1024 19:11:59.403054  519187 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-462645" [dc66da1c-434f-42dc-8976-38d690187f9c] Running
	I1024 19:11:59.403059  519187 system_pods.go:89] "kube-proxy-p67m6" [2070ad24-7067-49d8-a623-ad5bc1e23180] Running
	I1024 19:11:59.403067  519187 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-462645" [1885068c-3a87-4d0a-9c67-cc30d734f621] Running
	I1024 19:11:59.403073  519187 system_pods.go:89] "storage-provisioner" [762d7c20-9371-406c-884e-0e2401b336e1] Running
	I1024 19:11:59.403091  519187 system_pods.go:126] duration metric: took 204.506737ms to wait for k8s-apps to be running ...
	I1024 19:11:59.403107  519187 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:11:59.403187  519187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:11:59.420203  519187 system_svc.go:56] duration metric: took 17.05761ms WaitForService to wait for kubelet.
	I1024 19:11:59.420258  519187 kubeadm.go:581] duration metric: took 22.254571647s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:11:59.420288  519187 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:11:59.596479  519187 request.go:629] Waited for 176.050816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1024 19:11:59.599245  519187 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:11:59.599283  519187 node_conditions.go:123] node cpu capacity is 8
	I1024 19:11:59.599297  519187 node_conditions.go:105] duration metric: took 179.002288ms to run NodePressure ...
	I1024 19:11:59.599311  519187 start.go:228] waiting for startup goroutines ...
	I1024 19:11:59.599320  519187 start.go:233] waiting for cluster config update ...
	I1024 19:11:59.599340  519187 start.go:242] writing updated cluster config ...
	I1024 19:11:59.599623  519187 ssh_runner.go:195] Run: rm -f paused
	I1024 19:11:59.649647  519187 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1024 19:11:59.651776  519187 out.go:177] 
	W1024 19:11:59.653685  519187 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1024 19:11:59.655201  519187 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1024 19:11:59.656942  519187 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-462645" cluster and "default" namespace by default
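
A note on the skew warning above: kubectl's support policy covers only one minor version of skew against the API server, and this run pairs kubectl 1.28.3 with a 1.18.20 cluster. A minimal way to use a matching client, following the hint in the log and assuming this run's profile name:

	minikube -p ingress-addon-legacy-462645 kubectl -- get pods -A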
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:14:52 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:14:52.324562837Z" level=info msg="Started container" PID=4924 containerID=6643de4154861685f8e88bed42ee0c8b457f187a2b79ff8373820b6b6a9744dd description=default/hello-world-app-5f5d8b66bb-6nb7v/hello-world-app id=ed69e554-bb93-4f97-8945-4fab2dff1d35 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=49345a224edd8420c894183c1914d8792ca0897939a2cbf465d8bc6e8c4b58f6
	Oct 24 19:14:56 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:14:56.761896903Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=69eacf48-4969-45a5-b126-61cf2bf953ac name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:15:06 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:06.763132097Z" level=info msg="Stopping pod sandbox: 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=ec7a6f07-95ad-4467-9dca-4bb0d3ea9cf4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:06 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:06.764879484Z" level=info msg="Stopped pod sandbox: 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=ec7a6f07-95ad-4467-9dca-4bb0d3ea9cf4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:07 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:07.395172472Z" level=info msg="Stopping pod sandbox: 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=d0dbb02b-7eae-4ba8-b2b7-502942b00b39 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:07 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:07.395246120Z" level=info msg="Stopped pod sandbox (already stopped): 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=d0dbb02b-7eae-4ba8-b2b7-502942b00b39 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:08 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:08.261795735Z" level=info msg="Stopping container: e5f8e075a9f71925434bcd1548f76b8269aa80cf76d001b6c76127a086949a2f (timeout: 2s)" id=2db1fb50-d56f-475e-8a1c-00170c69e631 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 24 19:15:08 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:08.264418144Z" level=info msg="Stopping container: e5f8e075a9f71925434bcd1548f76b8269aa80cf76d001b6c76127a086949a2f (timeout: 2s)" id=d159b419-e2f3-4733-a0ab-dbae89e099c5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 24 19:15:08 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:08.761603793Z" level=info msg="Stopping pod sandbox: 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=1bb5d426-2a59-4139-81f5-c04141c6b7d2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:08 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:08.761662647Z" level=info msg="Stopped pod sandbox (already stopped): 32f78442ea82d146cfe82e729cd0cc502c098620f50bcbc980850af781d10a7c" id=1bb5d426-2a59-4139-81f5-c04141c6b7d2 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.272607681Z" level=warning msg="Stopping container e5f8e075a9f71925434bcd1548f76b8269aa80cf76d001b6c76127a086949a2f with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=2db1fb50-d56f-475e-8a1c-00170c69e631 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 24 19:15:10 ingress-addon-legacy-462645 conmon[3458]: conmon e5f8e075a9f71925434b <ninfo>: container 3470 exited with status 137
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.455448439Z" level=info msg="Stopped container e5f8e075a9f71925434bcd1548f76b8269aa80cf76d001b6c76127a086949a2f: ingress-nginx/ingress-nginx-controller-7fcf777cb7-52jf9/controller" id=d159b419-e2f3-4733-a0ab-dbae89e099c5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.455486968Z" level=info msg="Stopped container e5f8e075a9f71925434bcd1548f76b8269aa80cf76d001b6c76127a086949a2f: ingress-nginx/ingress-nginx-controller-7fcf777cb7-52jf9/controller" id=2db1fb50-d56f-475e-8a1c-00170c69e631 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.456295525Z" level=info msg="Stopping pod sandbox: 18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5" id=a540fb41-d0c7-44e5-a902-0b03eedbc446 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.456305905Z" level=info msg="Stopping pod sandbox: 18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5" id=0d739d87-69c0-441b-b8c6-1b0107219b23 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.460860860Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-7DXY43PQGQA4HLPP - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-EIACTU7I3UO6WEBQ - [0:0]\n-X KUBE-HP-EIACTU7I3UO6WEBQ\n-X KUBE-HP-7DXY43PQGQA4HLPP\nCOMMIT\n"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.462753796Z" level=info msg="Closing host port tcp:80"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.462814749Z" level=info msg="Closing host port tcp:443"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.464321226Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.464450989Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.464665871Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-52jf9 Namespace:ingress-nginx ID:18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5 UID:a7c868dd-9354-48a6-8531-87c804f3b41a NetNS:/var/run/netns/b821636e-510a-4581-a2a8-ec8b464b8e38 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.464895863Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-52jf9 from CNI network \"kindnet\" (type=ptp)"
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.504595789Z" level=info msg="Stopped pod sandbox: 18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5" id=a540fb41-d0c7-44e5-a902-0b03eedbc446 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 24 19:15:10 ingress-addon-legacy-462645 crio[963]: time="2023-10-24 19:15:10.504862380Z" level=info msg="Stopped pod sandbox (already stopped): 18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5" id=0d739d87-69c0-441b-b8c6-1b0107219b23 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
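
The CRI-O entries above show the ingress-nginx controller ignoring its 2-second stop signal and being killed (conmon reports exit status 137, i.e. 128 + SIGKILL) while the ingress addon is torn down. As a sketch for pulling these runtime logs straight from the node, assuming CRI-O runs under systemd as in minikube's kicbase image:

	minikube -p ingress-addon-legacy-462645 ssh -- sudo journalctl -u crio --no-pager --since '2023-10-24 19:14:00'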
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6643de4154861       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            23 seconds ago      Running             hello-world-app           0                   49345a224edd8       hello-world-app-5f5d8b66bb-6nb7v
	47c7921f395dc       docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf                    2 minutes ago       Running             nginx                     0                   9d369dee1e4ad       nginx
	e5f8e075a9f71       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   18c2e0e94d7fa       ingress-nginx-controller-7fcf777cb7-52jf9
	a480a05af2773       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   1b3d268705593       ingress-nginx-admission-patch-lb5xs
	7c697fd37bf0c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   ad59320bd4807       ingress-nginx-admission-create-9b7df
	97295f0dbb5dd       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   1437bf37c8947       coredns-66bff467f8-6sbb8
	9acb836fdcdb5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   1faa84f3a872b       storage-provisioner
	0fc13dde3ca47       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   c5e0b465b084c       kindnet-fmxm9
	0dcee8bf939e9       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   85626ac7d616d       kube-proxy-p67m6
	6319ec2f82116       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   7fbbe6a8a2a22       kube-scheduler-ingress-addon-legacy-462645
	52a59c733cc24       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   3e56b0161cf96       kube-controller-manager-ingress-addon-legacy-462645
	5cbd43ceaabb0       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   c719c4557fc83       etcd-ingress-addon-legacy-462645
	6511a94a03cdb       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   e03246458269f       kube-apiserver-ingress-addon-legacy-462645
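
The table above is CRI-level state; a comparable listing, including the Exited admission and controller containers, can usually be reproduced on the node with crictl:

	minikube -p ingress-addon-legacy-462645 ssh -- sudo crictl ps -a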
	
	* 
	* ==> coredns [97295f0dbb5ddd62de7452cbd4752e4dcd3899e97f0526fe49d426de36b2f748] <==
	* [INFO] 10.244.0.5:48320 - 57549 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007811477s
	[INFO] 10.244.0.5:48320 - 34430 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005155629s
	[INFO] 10.244.0.5:53654 - 11165 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005092737s
	[INFO] 10.244.0.5:36398 - 2846 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005759257s
	[INFO] 10.244.0.5:39660 - 56317 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006086436s
	[INFO] 10.244.0.5:56680 - 45888 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006043277s
	[INFO] 10.244.0.5:45437 - 10229 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005845883s
	[INFO] 10.244.0.5:59822 - 19781 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005945632s
	[INFO] 10.244.0.5:44893 - 55678 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006072065s
	[INFO] 10.244.0.5:39660 - 14029 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007008539s
	[INFO] 10.244.0.5:53654 - 5435 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00743962s
	[INFO] 10.244.0.5:48320 - 3802 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007693046s
	[INFO] 10.244.0.5:45437 - 36771 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.0072926s
	[INFO] 10.244.0.5:39660 - 51680 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063578s
	[INFO] 10.244.0.5:36398 - 9820 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007328245s
	[INFO] 10.244.0.5:45437 - 14860 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053028s
	[INFO] 10.244.0.5:59822 - 59165 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007430047s
	[INFO] 10.244.0.5:53654 - 4745 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000297759s
	[INFO] 10.244.0.5:56680 - 1967 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007492152s
	[INFO] 10.244.0.5:48320 - 11424 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000247801s
	[INFO] 10.244.0.5:44893 - 52122 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00761892s
	[INFO] 10.244.0.5:36398 - 1953 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000135052s
	[INFO] 10.244.0.5:56680 - 30293 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073381s
	[INFO] 10.244.0.5:44893 - 25846 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094794s
	[INFO] 10.244.0.5:59822 - 45946 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126776s
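
The NXDOMAIN runs above are the resolver walking the pod's DNS search path: with the default ndots:5, even the qualified name hello-world-app.default.svc.cluster.local is first tried with the host-inherited GCE suffixes (c.k8s-minikube.internal, google.internal) before the bare name returns NOERROR. A quick check of that search path, assuming the test's nginx pod is still running:

	kubectl --context ingress-addon-legacy-462645 exec nginx -- cat /etc/resolv.conf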
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-462645
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-462645
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=ingress-addon-legacy-462645
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_11_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:11:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-462645
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:15:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:12:50 +0000   Tue, 24 Oct 2023 19:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:12:50 +0000   Tue, 24 Oct 2023 19:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:12:50 +0000   Tue, 24 Oct 2023 19:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:12:50 +0000   Tue, 24 Oct 2023 19:11:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-462645
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 3163db3a6b8543b4b188e3d77ba34888
	  System UUID:                38dbb9b0-d6bc-4479-a450-f07134511b6c
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-6nb7v                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 coredns-66bff467f8-6sbb8                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m40s
	  kube-system                 etcd-ingress-addon-legacy-462645                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kindnet-fmxm9                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m40s
	  kube-system                 kube-apiserver-ingress-addon-legacy-462645             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-462645    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-p67m6                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-scheduler-ingress-addon-legacy-462645             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m56s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s  kubelet     Node ingress-addon-legacy-462645 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s  kubelet     Node ingress-addon-legacy-462645 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s  kubelet     Node ingress-addon-legacy-462645 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m39s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m26s  kubelet     Node ingress-addon-legacy-462645 status is now: NodeReady
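
The node summary above is standard kubectl describe output; to regenerate it against this profile:

	kubectl --context ingress-addon-legacy-462645 describe node ingress-addon-legacy-462645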
	
	* 
	* ==> dmesg <==
	* [  +0.008410] FS-Cache: O-key=[8] 'dba20f0200000000'
	[  +0.004967] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006633] FS-Cache: N-cookie d=00000000758e7ab6{9p.inode} n=000000005cf6e31b
	[  +0.008764] FS-Cache: N-key=[8] 'dba20f0200000000'
	[  +0.357518] FS-Cache: Duplicate cookie detected
	[  +0.004696] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=00000000758e7ab6{9p.inode} n=00000000d264f8e9
	[  +0.007356] FS-Cache: O-key=[8] 'e2a20f0200000000'
	[  +0.004932] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006567] FS-Cache: N-cookie d=00000000758e7ab6{9p.inode} n=000000001cfa9689
	[  +0.007381] FS-Cache: N-key=[8] 'e2a20f0200000000'
	[Oct24 19:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +1.019070] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +2.015758] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +4.255535] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +8.195184] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[Oct24 19:13] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[ +32.764787] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	
	* 
	* ==> etcd [5cbd43ceaabb09c769a991fba2155a00fc09f89d3549a5f58088890394d7173c] <==
	* raft2023/10/24 19:11:13 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/24 19:11:13 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:11:13.544933 W | auth: simple token is not cryptographically signed
	2023-10-24 19:11:13.551134 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-24 19:11:13.552054 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:11:13.552526 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-24 19:11:13.554473 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 19:11:13.555375 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-24 19:11:13.555536 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/24 19:11:13 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/24 19:11:13 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-24 19:11:13.672723 I | etcdserver: published {Name:ingress-addon-legacy-462645 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-24 19:11:13.672835 I | embed: ready to serve client requests
	2023-10-24 19:11:13.672961 I | embed: ready to serve client requests
	2023-10-24 19:11:13.673032 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-24 19:11:13.675059 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-24 19:11:13.675228 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-24 19:11:13.675353 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-24 19:11:13.676286 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  19:15:16 up  2:57,  0 users,  load average: 0.29, 0.95, 1.00
	Linux ingress-addon-legacy-462645 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [0fc13dde3ca4781412e51f539e7c8f34119a5322d7d2f96003935795f4dbdbfb] <==
	* I1024 19:13:13.119164       1 main.go:227] handling current node
	I1024 19:13:23.122387       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:13:23.122412       1 main.go:227] handling current node
	I1024 19:13:33.126774       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:13:33.126802       1 main.go:227] handling current node
	I1024 19:13:43.136640       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:13:43.136677       1 main.go:227] handling current node
	I1024 19:13:53.140529       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:13:53.140561       1 main.go:227] handling current node
	I1024 19:14:03.154844       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:03.154875       1 main.go:227] handling current node
	I1024 19:14:13.162305       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:13.162342       1 main.go:227] handling current node
	I1024 19:14:23.175580       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:23.175612       1 main.go:227] handling current node
	I1024 19:14:33.179864       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:33.179893       1 main.go:227] handling current node
	I1024 19:14:43.190122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:43.190151       1 main.go:227] handling current node
	I1024 19:14:53.194999       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:14:53.195104       1 main.go:227] handling current node
	I1024 19:15:03.204533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:15:03.204565       1 main.go:227] handling current node
	I1024 19:15:13.208306       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:15:13.208332       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6511a94a03cdb1020aff59c23683b973f0b8150a6fbf80ef7208e8562004f13e] <==
	* E1024 19:11:17.460272       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1024 19:11:17.558016       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:11:17.559483       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1024 19:11:17.560169       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1024 19:11:17.560185       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:11:17.560199       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:11:18.456792       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1024 19:11:18.457134       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1024 19:11:18.463252       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1024 19:11:18.466388       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:11:18.466407       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1024 19:11:18.887675       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:11:18.937760       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1024 19:11:19.066900       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1024 19:11:19.067981       1 controller.go:609] quota admission added evaluator for: endpoints
	I1024 19:11:19.072272       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:11:19.814265       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1024 19:11:20.341279       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1024 19:11:20.468975       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1024 19:11:20.695365       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:11:36.824965       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1024 19:11:36.867635       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1024 19:12:00.478332       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1024 19:12:28.309255       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1024 19:15:07.394543       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0xc006ff42d8), encoder:(*versioning.codec)(0xc00ad0ef00), buf:(*bytes.Buffer)(0xc00ddf2270)})
	
	* 
	* ==> kube-controller-manager [52a59c733cc2411cab62b39c9605db9c185c6a50386049221ea35bb0061a644c] <==
	* W1024 19:11:37.063541       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-462645. Assuming now as a timestamp.
	I1024 19:11:37.063524       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-462645", UID:"4d4739fe-1a49-40bf-9b9e-b0fd1407f762", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-462645 event: Registered Node ingress-addon-legacy-462645 in Controller
	I1024 19:11:37.063627       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1024 19:11:37.168843       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5c3395f8-a484-4662-b13c-3909ab694f1a", APIVersion:"apps/v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1024 19:11:37.189923       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0a1f5948-9856-49fe-9d04-0638ff853a1f", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-xgvnq
	I1024 19:11:37.240990       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1024 19:11:37.262888       1 shared_informer.go:230] Caches are synced for endpoint 
	I1024 19:11:37.269184       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1024 19:11:37.341152       1 shared_informer.go:230] Caches are synced for job 
	I1024 19:11:37.465331       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:11:37.541099       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:11:37.541145       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1024 19:11:37.541885       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:11:37.841067       1 request.go:621] Throttling request took 1.037527s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	I1024 19:11:38.293609       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1024 19:11:38.293665       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:11:52.064884       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1024 19:12:00.469498       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f719826a-06bf-4a54-8e78-8037293434f4", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1024 19:12:00.478047       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"5fdb1ee1-3ee3-4c76-b8c8-f17a81099af1", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-52jf9
	I1024 19:12:00.556830       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4f744d68-39d0-436a-877f-8aad6c9590a5", APIVersion:"batch/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-9b7df
	I1024 19:12:00.563774       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"eea2731c-4b23-4f45-8881-d2edfd2549e6", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-lb5xs
	I1024 19:12:03.965211       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"4f744d68-39d0-436a-877f-8aad6c9590a5", APIVersion:"batch/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1024 19:12:04.963366       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"eea2731c-4b23-4f45-8881-d2edfd2549e6", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1024 19:14:49.907349       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b68fa725-0c5a-4aaa-91d2-f28776a4f2e0", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1024 19:14:49.911897       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"629f36f4-e362-43e9-ba47-097aa2639013", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-6nb7v
	
	* 
	* ==> kube-proxy [0dcee8bf939e95f08fa80bf88235c07ba1163c659d888dfa0f4995734d813d4e] <==
	* W1024 19:11:37.770183       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1024 19:11:37.845612       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1024 19:11:37.845749       1 server_others.go:186] Using iptables Proxier.
	I1024 19:11:37.846515       1 server.go:583] Version: v1.18.20
	I1024 19:11:37.847702       1 config.go:315] Starting service config controller
	I1024 19:11:37.847746       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1024 19:11:37.848563       1 config.go:133] Starting endpoints config controller
	I1024 19:11:37.848596       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1024 19:11:37.948227       1 shared_informer.go:230] Caches are synced for service config 
	I1024 19:11:37.948834       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [6319ec2f8211627b56a2a8d9a62b933443851fb9ca66fb666aeb31ea48fd699e] <==
	* W1024 19:11:17.557304       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:11:17.646513       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:11:17.646635       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:11:17.648709       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:11:17.649262       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:11:17.649594       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1024 19:11:17.649669       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1024 19:11:17.651814       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:11:17.652061       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:11:17.652248       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:11:17.652389       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:11:17.652505       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:11:17.652719       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:11:17.652840       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:11:17.652925       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:11:17.653024       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:11:17.658412       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:11:17.659972       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:11:17.660272       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:11:18.509707       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:11:18.514589       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:11:18.570492       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:11:18.769907       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1024 19:11:20.149511       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1024 19:11:36.961632       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xgvnq is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 24 19:14:31 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:31.762747    1872 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:31 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:31.762784    1872 pod_workers.go:191] Error syncing pod 3a8a05ca-983f-459a-861b-f4a4c2aeef37 ("kube-ingress-dns-minikube_kube-system(3a8a05ca-983f-459a-861b-f4a4c2aeef37)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:14:45 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:45.762782    1872 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:45 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:45.762969    1872 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:45 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:45.763141    1872 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:45 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:45.763192    1872 pod_workers.go:191] Error syncing pod 3a8a05ca-983f-459a-861b-f4a4c2aeef37 ("kube-ingress-dns-minikube_kube-system(3a8a05ca-983f-459a-861b-f4a4c2aeef37)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:14:49 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:14:49.919764    1872 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 24 19:14:50 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:14:50.070758    1872 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-xdk2j" (UniqueName: "kubernetes.io/secret/ba533391-6d9a-41c7-88a3-8f42effcc169-default-token-xdk2j") pod "hello-world-app-5f5d8b66bb-6nb7v" (UID: "ba533391-6d9a-41c7-88a3-8f42effcc169")
	Oct 24 19:14:50 ingress-addon-legacy-462645 kubelet[1872]: W1024 19:14:50.585951    1872 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/273a3aa1a5fc6cbd3706abc673be58f2e1d22f67c15d0ba6f683f373becd3358/crio-49345a224edd8420c894183c1914d8792ca0897939a2cbf465d8bc6e8c4b58f6 WatchSource:0}: Error finding container 49345a224edd8420c894183c1914d8792ca0897939a2cbf465d8bc6e8c4b58f6: Status 404 returned error &{%!s(*http.body=&{0xc000db1d40 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Oct 24 19:14:56 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:56.762244    1872 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:56 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:56.762288    1872 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:56 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:56.762342    1872 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:14:56 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:14:56.762377    1872 pod_workers.go:191] Error syncing pod 3a8a05ca-983f-459a-861b-f4a4c2aeef37 ("kube-ingress-dns-minikube_kube-system(3a8a05ca-983f-459a-861b-f4a4c2aeef37)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:15:05 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:05.885808    1872 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-6nhnl" (UniqueName: "kubernetes.io/secret/3a8a05ca-983f-459a-861b-f4a4c2aeef37-minikube-ingress-dns-token-6nhnl") pod "3a8a05ca-983f-459a-861b-f4a4c2aeef37" (UID: "3a8a05ca-983f-459a-861b-f4a4c2aeef37")
	Oct 24 19:15:05 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:05.888153    1872 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a8a05ca-983f-459a-861b-f4a4c2aeef37-minikube-ingress-dns-token-6nhnl" (OuterVolumeSpecName: "minikube-ingress-dns-token-6nhnl") pod "3a8a05ca-983f-459a-861b-f4a4c2aeef37" (UID: "3a8a05ca-983f-459a-861b-f4a4c2aeef37"). InnerVolumeSpecName "minikube-ingress-dns-token-6nhnl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:15:05 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:05.986226    1872 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-6nhnl" (UniqueName: "kubernetes.io/secret/3a8a05ca-983f-459a-861b-f4a4c2aeef37-minikube-ingress-dns-token-6nhnl") on node "ingress-addon-legacy-462645" DevicePath ""
	Oct 24 19:15:08 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:15:08.263066    1872 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-52jf9.17912114ed65f010", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-52jf9", UID:"a7c868dd-9354-48a6-8531-87c804f3b41a", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-462645"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14623cf0f939810, ext:227975265522, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14623cf0f939810, ext:227975265522, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-52jf9.17912114ed65f010" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 24 19:15:08 ingress-addon-legacy-462645 kubelet[1872]: E1024 19:15:08.267898    1872 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-52jf9.17912114ed65f010", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-52jf9", UID:"a7c868dd-9354-48a6-8531-87c804f3b41a", APIVersion:"v1", ResourceVersion:"472", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-462645"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14623cf0f939810, ext:227975265522, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14623cf0fbe6cc1, ext:227978072488, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-52jf9.17912114ed65f010" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 24 19:15:11 ingress-addon-legacy-462645 kubelet[1872]: W1024 19:15:11.393965    1872 pod_container_deletor.go:77] Container "18c2e0e94d7fa1f84c9a830feca0cc3a90e6b3c330a5401707b830855f859fc5" not found in pod's containers
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.407766    1872 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-cjg4g" (UniqueName: "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-ingress-nginx-token-cjg4g") pod "a7c868dd-9354-48a6-8531-87c804f3b41a" (UID: "a7c868dd-9354-48a6-8531-87c804f3b41a")
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.407810    1872 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-webhook-cert") pod "a7c868dd-9354-48a6-8531-87c804f3b41a" (UID: "a7c868dd-9354-48a6-8531-87c804f3b41a")
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.410058    1872 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-ingress-nginx-token-cjg4g" (OuterVolumeSpecName: "ingress-nginx-token-cjg4g") pod "a7c868dd-9354-48a6-8531-87c804f3b41a" (UID: "a7c868dd-9354-48a6-8531-87c804f3b41a"). InnerVolumeSpecName "ingress-nginx-token-cjg4g". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.410072    1872 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7c868dd-9354-48a6-8531-87c804f3b41a" (UID: "a7c868dd-9354-48a6-8531-87c804f3b41a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.508115    1872 reconciler.go:319] Volume detached for volume "ingress-nginx-token-cjg4g" (UniqueName: "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-ingress-nginx-token-cjg4g") on node "ingress-addon-legacy-462645" DevicePath ""
	Oct 24 19:15:12 ingress-addon-legacy-462645 kubelet[1872]: I1024 19:15:12.508155    1872 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a7c868dd-9354-48a6-8531-87c804f3b41a-webhook-cert") on node "ingress-addon-legacy-462645" DevicePath ""
	
	* 
	* ==> storage-provisioner [9acb836fdcdb5c72db1b998f847074b5e5290b6735b3cae7b0222c78747eb6f0] <==
	* I1024 19:11:56.254267       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:11:56.263313       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:11:56.263386       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:11:56.272073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:11:56.272293       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-462645_ae329c5c-1c60-4921-8e18-c2ad4b4f87d0!
	I1024 19:11:56.272300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c7f618b-310c-4347-92e0-7128772a5fae", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-462645_ae329c5c-1c60-4921-8e18-c2ad4b4f87d0 became leader
	I1024 19:11:56.373340       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-462645_ae329c5c-1c60-4921-8e18-c2ad4b4f87d0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-462645 -n ingress-addon-legacy-462645
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-462645 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (185.22s)
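
The decisive error in the kubelet log above is the repeated ImageInspectError: CRI-O refuses to resolve the short name cryptexlabs/minikube-ingress-dns because /etc/containers/registries.conf inside the node defines no unqualified-search registries. A minimal registries.conf sketch that would let the pull proceed, assuming docker.io is the intended registry (the alias entry is illustrative, not taken from the node's actual config):

    # /etc/containers/registries.conf -- sketch; assumes docker.io is the intended source
    unqualified-search-registries = ["docker.io"]

    # Or, narrower: map only this short name to a fully qualified reference
    [aliases]
      "cryptexlabs/minikube-ingress-dns" = "docker.io/cryptexlabs/minikube-ingress-dns"

The more durable fix is for the addon manifest to reference the fully qualified image (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 plus its sha256 digest) so resolution never depends on the host's search list.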

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- sh -c "ping -c 1 192.168.58.1": exit status 1 (206.031128ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-j2cch): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- sh -c "ping -c 1 192.168.58.1": exit status 1 (221.077148ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-px9mp): exit status 1
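
Both pods resolve host.minikube.internal correctly; only the ICMP step fails. BusyBox ping opens a raw socket, which requires CAP_NET_RAW (historically root or setuid), and these crio pods run without it. A minimal pod sketch that would be permitted to ping, assuming a cluster where adding NET_RAW is allowed; the pod name and image tag are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ping-probe                 # illustrative name
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.36
        command: ["sleep", "3600"]
        securityContext:
          capabilities:
            add: ["NET_RAW"]           # lets BusyBox ping open a raw ICMP socket

For ping implementations that support ICMP datagram sockets, the rootless alternative is the safe sysctl net.ipv4.ping_group_range (e.g. value "0 2147483647") in the pod-level securityContext.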
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-961484
helpers_test.go:235: (dbg) docker inspect multinode-961484:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb",
	        "Created": "2023-10-24T19:20:40.258294704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 566187,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:20:40.556725252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/hostname",
	        "HostsPath": "/var/lib/docker/containers/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/hosts",
	        "LogPath": "/var/lib/docker/containers/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb-json.log",
	        "Name": "/multinode-961484",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-961484:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-961484",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/15340a288a74bd16de0295579dc4521c40122828144ca66d51f9b3979c47acbb-init/diff:/var/lib/docker/overlay2/a59d6c70e56c008d6cc4bbed94412eb512943c9d608e3d99495b95d6ce6d39c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15340a288a74bd16de0295579dc4521c40122828144ca66d51f9b3979c47acbb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15340a288a74bd16de0295579dc4521c40122828144ca66d51f9b3979c47acbb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15340a288a74bd16de0295579dc4521c40122828144ca66d51f9b3979c47acbb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-961484",
	                "Source": "/var/lib/docker/volumes/multinode-961484/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-961484",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-961484",
	                "name.minikube.sigs.k8s.io": "multinode-961484",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a628b667e8a2ea6c88a07f05f6060a7482646a16c9ca4df2d7fa14d27dfb24f8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33270"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33269"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33266"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33268"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33267"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a628b667e8a2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-961484": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a82cc8c16283",
	                        "multinode-961484"
	                    ],
	                    "NetworkID": "fee0293b013f0fd00ddb402bad0ecab1e24aaf1ce07fabba8fb13e66a66043cb",
	                    "EndpointID": "4549c61a583ed5e18f6546e1ddcac4f0bf964a7ba5489f8d5558ec09cabc6570",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
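The inspect dump confirms the ping target: 192.168.58.1 is the Gateway of the multinode-961484 network, i.e. the host side of the Docker bridge, so the route exists and the failure is purely the in-pod permission issue noted above. The same field can be pulled with an inspect format template in the report's own --format={{...}} style; a sketch using the network name from this dump:

    docker inspect -f '{{ (index .NetworkSettings.Networks "multinode-961484").Gateway }}' multinode-961484
    # expected output, per the dump above: 192.168.58.1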
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-961484 -n multinode-961484
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-961484 logs -n 25: (1.402998989s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-193912                           | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-193912 ssh -- ls                    | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-173651                           | mount-start-1-173651 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-193912 ssh -- ls                    | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-193912                           | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	| start   | -p mount-start-2-193912                           | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	| ssh     | mount-start-2-193912 ssh -- ls                    | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-193912                           | mount-start-2-193912 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	| delete  | -p mount-start-1-173651                           | mount-start-1-173651 | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:20 UTC |
	| start   | -p multinode-961484                               | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:20 UTC | 24 Oct 23 19:21 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- apply -f                   | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- rollout                    | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- get pods -o                | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- get pods -o                | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-j2cch --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-px9mp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-j2cch --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-px9mp --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-j2cch -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-px9mp -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- get pods -o                | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-j2cch                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC |                     |
	|         | busybox-5bc68d56bd-j2cch -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC | 24 Oct 23 19:21 UTC |
	|         | busybox-5bc68d56bd-px9mp                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-961484 -- exec                       | multinode-961484     | jenkins | v1.31.2 | 24 Oct 23 19:21 UTC |                     |
	|         | busybox-5bc68d56bd-px9mp -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
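Note that the two "ping -c 1 192.168.58.1" rows near the end of the table have no completion timestamp, which lines up with the TestMultiNode/serial/PingHostFrom2Pods failure in the summary: the busybox pods could not reach the host gateway. To replay the failing step by hand (a sketch; the pod names are specific to this run, and invoking it through minikube's bundled kubectl as the harness appears to do is an assumption):

	out/minikube-linux-amd64 kubectl -p multinode-961484 -- \
	  exec busybox-5bc68d56bd-j2cch -- sh -c "ping -c 1 192.168.58.1"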
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:20:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:20:33.571042  565581 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:20:33.571180  565581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:20:33.571190  565581 out.go:309] Setting ErrFile to fd 2...
	I1024 19:20:33.571197  565581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:20:33.571422  565581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:20:33.572038  565581 out.go:303] Setting JSON to false
	I1024 19:20:33.573879  565581 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10981,"bootTime":1698164253,"procs":575,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:20:33.573960  565581 start.go:138] virtualization: kvm guest
	I1024 19:20:33.576884  565581 out.go:177] * [multinode-961484] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:20:33.579159  565581 notify.go:220] Checking for updates...
	I1024 19:20:33.579169  565581 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:20:33.581191  565581 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:20:33.583150  565581 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:20:33.584829  565581 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:20:33.586371  565581 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:20:33.587749  565581 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:20:33.589453  565581 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:20:33.619563  565581 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:20:33.619736  565581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:20:33.683768  565581 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-24 19:20:33.673865658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:20:33.683879  565581 docker.go:295] overlay module found
	I1024 19:20:33.686485  565581 out.go:177] * Using the docker driver based on user configuration
	I1024 19:20:33.688445  565581 start.go:298] selected driver: docker
	I1024 19:20:33.688467  565581 start.go:902] validating driver "docker" against <nil>
	I1024 19:20:33.688479  565581 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:20:33.689406  565581 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:20:33.750749  565581 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-24 19:20:33.740177499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:20:33.750973  565581 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:20:33.751324  565581 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:20:33.753814  565581 out.go:177] * Using Docker driver with root privileges
	I1024 19:20:33.755415  565581 cni.go:84] Creating CNI manager for ""
	I1024 19:20:33.755444  565581 cni.go:136] 0 nodes found, recommending kindnet
	I1024 19:20:33.755468  565581 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:20:33.755491  565581 start_flags.go:323] config:
	{Name:multinode-961484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
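The same cluster config is persisted to the profile's config.json (saved below at 19:20:33.763806). A sketch for pulling the key fields back out of it, assuming jq is installed on the agent:

	jq '{Name, Memory, CPUs,
	     KubernetesVersion: .KubernetesConfig.KubernetesVersion,
	     ContainerRuntime:  .KubernetesConfig.ContainerRuntime}' \
	  /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json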
	I1024 19:20:33.757648  565581 out.go:177] * Starting control plane node multinode-961484 in cluster multinode-961484
	I1024 19:20:33.759480  565581 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:20:33.761321  565581 out.go:177] * Pulling base image ...
	I1024 19:20:33.763140  565581 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:20:33.763193  565581 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:20:33.763220  565581 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:20:33.763236  565581 cache.go:57] Caching tarball of preloaded images
	I1024 19:20:33.763352  565581 preload.go:174] Found /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:20:33.763367  565581 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:20:33.763806  565581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json ...
	I1024 19:20:33.763844  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json: {Name:mk64766667c049ba78df0b77bf595e007046a8f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:33.780642  565581 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:20:33.780680  565581 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:20:33.780704  565581 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:20:33.780743  565581 start.go:365] acquiring machines lock for multinode-961484: {Name:mk1702cc8a7c401a5da2ea5e2079ef5c26a84aa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:20:33.780887  565581 start.go:369] acquired machines lock for "multinode-961484" in 116.982µs
	I1024 19:20:33.780914  565581 start.go:93] Provisioning new machine with config: &{Name:multinode-961484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:20:33.781050  565581 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:20:33.783204  565581 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 19:20:33.783533  565581 start.go:159] libmachine.API.Create for "multinode-961484" (driver="docker")
	I1024 19:20:33.783581  565581 client.go:168] LocalClient.Create starting
	I1024 19:20:33.783670  565581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem
	I1024 19:20:33.783711  565581 main.go:141] libmachine: Decoding PEM data...
	I1024 19:20:33.783736  565581 main.go:141] libmachine: Parsing certificate...
	I1024 19:20:33.783796  565581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem
	I1024 19:20:33.783831  565581 main.go:141] libmachine: Decoding PEM data...
	I1024 19:20:33.783848  565581 main.go:141] libmachine: Parsing certificate...
	I1024 19:20:33.784258  565581 cli_runner.go:164] Run: docker network inspect multinode-961484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:20:33.801395  565581 cli_runner.go:211] docker network inspect multinode-961484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:20:33.801463  565581 network_create.go:281] running [docker network inspect multinode-961484] to gather additional debugging logs...
	I1024 19:20:33.801480  565581 cli_runner.go:164] Run: docker network inspect multinode-961484
	W1024 19:20:33.818376  565581 cli_runner.go:211] docker network inspect multinode-961484 returned with exit code 1
	I1024 19:20:33.818405  565581 network_create.go:284] error running [docker network inspect multinode-961484]: docker network inspect multinode-961484: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-961484 not found
	I1024 19:20:33.818417  565581 network_create.go:286] output of [docker network inspect multinode-961484]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-961484 not found
	
	** /stderr **
	I1024 19:20:33.818588  565581 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:20:33.835044  565581 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7cb31ca22f4a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:85:f3:ac:06} reservation:<nil>}
	I1024 19:20:33.835540  565581 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002864980}
	I1024 19:20:33.835582  565581 network_create.go:124] attempt to create docker network multinode-961484 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1024 19:20:33.835628  565581 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-961484 multinode-961484
	I1024 19:20:33.903165  565581 network_create.go:108] docker network multinode-961484 192.168.58.0/24 created
	I1024 19:20:33.903200  565581 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-961484" container
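To verify that the bridge network the log just created matches the subnet chosen above (a sketch; requires only the docker CLI on the same host):

	docker network inspect multinode-961484 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# expected: subnet=192.168.58.0/24 gateway=192.168.58.1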
	I1024 19:20:33.903292  565581 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:20:33.925773  565581 cli_runner.go:164] Run: docker volume create multinode-961484 --label name.minikube.sigs.k8s.io=multinode-961484 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:20:33.949992  565581 oci.go:103] Successfully created a docker volume multinode-961484
	I1024 19:20:33.950095  565581 cli_runner.go:164] Run: docker run --rm --name multinode-961484-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-961484 --entrypoint /usr/bin/test -v multinode-961484:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:20:34.510176  565581 oci.go:107] Successfully prepared a docker volume multinode-961484
	I1024 19:20:34.510227  565581 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:20:34.510254  565581 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:20:34.510351  565581 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-961484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:20:40.188972  565581 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-961484:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.678560334s)
	I1024 19:20:40.189020  565581 kic.go:200] duration metric: took 5.678755 seconds to extract preloaded images to volume
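The extraction above uses a common trick: a throwaway container mounts the lz4 preload read-only and the named volume, then tar unpacks the cached images straight into the volume that will later back /var in the node container. The generic pattern, with placeholder names (a sketch, not minikube's API):

	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423 \
	  -I lz4 -xf /preloaded.tar -C /extractDir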
	W1024 19:20:40.189249  565581 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:20:40.189436  565581 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:20:40.243426  565581 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-961484 --name multinode-961484 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-961484 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-961484 --network multinode-961484 --ip 192.168.58.2 --volume multinode-961484:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:20:40.566688  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Running}}
	I1024 19:20:40.591100  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:20:40.613564  565581 cli_runner.go:164] Run: docker exec multinode-961484 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:20:40.688753  565581 oci.go:144] the created container "multinode-961484" has a running status.
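The docker run that created the node container is dense; condensed and annotated it amounts to the following (a sketch of the same flags, not the exact invocation):

	# --tmpfs /tmp --tmpfs /run : fresh tmpfs mounts so systemd can boot in the container
	# -v /lib/modules:...:ro    : host kernel modules, read-only
	# --ip 192.168.58.2         : the static IP calculated above on the cluster network
	# --volume ...:/var         : the named volume holding the preloaded images
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --network multinode-961484 --ip 192.168.58.2 --volume multinode-961484:/var \
	  --memory=2200mb --cpus=2 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --name multinode-961484 --hostname multinode-961484 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423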
	I1024 19:20:40.688819  565581 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa...
	I1024 19:20:40.899209  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 19:20:40.899289  565581 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:20:40.926499  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:20:40.947033  565581 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:20:40.947060  565581 kic_runner.go:114] Args: [docker exec --privileged multinode-961484 chown docker:docker /home/docker/.ssh/authorized_keys]
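The SSH bootstrap shown here is: generate a key pair on the host, install the public half as /home/docker/.ssh/authorized_keys inside the container, and chown it so the docker user can log in over the mapped SSH port. By hand it would look roughly like this (a sketch; port 33270 is the 22/tcp mapping from this log, and the mkdir step is an assumption about the image layout):

	ssh-keygen -t rsa -f ./id_rsa -N ''
	docker exec --privileged multinode-961484 mkdir -p /home/docker/.ssh
	docker cp ./id_rsa.pub multinode-961484:/home/docker/.ssh/authorized_keys
	docker exec --privileged multinode-961484 chown -R docker:docker /home/docker/.ssh
	ssh -i ./id_rsa -p 33270 docker@127.0.0.1 true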
	I1024 19:20:41.030459  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:20:41.049185  565581 machine.go:88] provisioning docker machine ...
	I1024 19:20:41.049231  565581 ubuntu.go:169] provisioning hostname "multinode-961484"
	I1024 19:20:41.049316  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:41.070674  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:20:41.071296  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33270 <nil> <nil>}
	I1024 19:20:41.071320  565581 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-961484 && echo "multinode-961484" | sudo tee /etc/hostname
	I1024 19:20:41.072182  565581 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50776->127.0.0.1:33270: read: connection reset by peer
	I1024 19:20:44.212979  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-961484
	
	I1024 19:20:44.213143  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:44.232340  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:20:44.232758  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33270 <nil> <nil>}
	I1024 19:20:44.232805  565581 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-961484' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-961484/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-961484' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:20:44.357253  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:20:44.357281  565581 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:20:44.357300  565581 ubuntu.go:177] setting up certificates
	I1024 19:20:44.357312  565581 provision.go:83] configureAuth start
	I1024 19:20:44.357365  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484
	I1024 19:20:44.380716  565581 provision.go:138] copyHostCerts
	I1024 19:20:44.380799  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:20:44.380846  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:20:44.380855  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:20:44.380944  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:20:44.381057  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:20:44.381078  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:20:44.381083  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:20:44.381113  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:20:44.381178  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:20:44.381198  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:20:44.381202  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:20:44.381241  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:20:44.381323  565581 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.multinode-961484 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-961484]
	I1024 19:20:44.442203  565581 provision.go:172] copyRemoteCerts
	I1024 19:20:44.442279  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:20:44.442326  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:44.464931  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:20:44.555150  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:20:44.555230  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:20:44.579076  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:20:44.579137  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1024 19:20:44.603372  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:20:44.603458  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:20:44.626941  565581 provision.go:86] duration metric: configureAuth took 269.616159ms
	I1024 19:20:44.626973  565581 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:20:44.627192  565581 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:20:44.627299  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:44.646136  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:20:44.646497  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33270 <nil> <nil>}
	I1024 19:20:44.646515  565581 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:20:44.876471  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:20:44.876502  565581 machine.go:91] provisioned docker machine in 3.827287852s
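The %!s(MISSING) in the command two entries up is Go's fmt marker for a scrubbed format argument; the intended payload is exactly the CRIO_MINIKUBE_OPTIONS line echoed back in the SSH output. Written by hand inside the node it reduces to (a sketch; run as root):

	mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  > /etc/sysconfig/crio.minikube
	systemctl restart crio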
	I1024 19:20:44.876513  565581 client.go:171] LocalClient.Create took 11.092915285s
	I1024 19:20:44.876542  565581 start.go:167] duration metric: libmachine.API.Create for "multinode-961484" took 11.09301268s
	I1024 19:20:44.876550  565581 start.go:300] post-start starting for "multinode-961484" (driver="docker")
	I1024 19:20:44.876559  565581 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:20:44.876623  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:20:44.876661  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:44.895702  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:20:44.986635  565581 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:20:44.990169  565581 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1024 19:20:44.990195  565581 command_runner.go:130] > NAME="Ubuntu"
	I1024 19:20:44.990209  565581 command_runner.go:130] > VERSION_ID="22.04"
	I1024 19:20:44.990215  565581 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1024 19:20:44.990220  565581 command_runner.go:130] > VERSION_CODENAME=jammy
	I1024 19:20:44.990224  565581 command_runner.go:130] > ID=ubuntu
	I1024 19:20:44.990228  565581 command_runner.go:130] > ID_LIKE=debian
	I1024 19:20:44.990232  565581 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1024 19:20:44.990236  565581 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1024 19:20:44.990247  565581 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1024 19:20:44.990254  565581 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1024 19:20:44.990258  565581 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1024 19:20:44.990309  565581 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:20:44.990331  565581 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:20:44.990342  565581 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:20:44.990351  565581 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:20:44.990363  565581 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:20:44.990427  565581 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:20:44.990497  565581 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:20:44.990506  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /etc/ssl/certs/4783232.pem
	I1024 19:20:44.990588  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:20:45.000602  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:20:45.027219  565581 start.go:303] post-start completed in 150.652872ms
	I1024 19:20:45.027795  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484
	I1024 19:20:45.048938  565581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json ...
	I1024 19:20:45.049256  565581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:20:45.049303  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:45.072360  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:20:45.162089  565581 command_runner.go:130] > 21%!
	(MISSING)I1024 19:20:45.162172  565581 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:20:45.166679  565581 command_runner.go:130] > 233G
	I1024 19:20:45.166711  565581 start.go:128] duration metric: createHost completed in 11.385650863s
	I1024 19:20:45.166723  565581 start.go:83] releasing machines lock for "multinode-961484", held for 11.385821641s
	I1024 19:20:45.166781  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484
	I1024 19:20:45.185730  565581 ssh_runner.go:195] Run: cat /version.json
	I1024 19:20:45.185808  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:45.185811  565581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:20:45.185989  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:20:45.206184  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:20:45.207187  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:20:45.393482  565581 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:20:45.396606  565581 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1024 19:20:45.396744  565581 ssh_runner.go:195] Run: systemctl --version
	I1024 19:20:45.401438  565581 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1024 19:20:45.401479  565581 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1024 19:20:45.401531  565581 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:20:45.543961  565581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:20:45.548258  565581 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1024 19:20:45.548283  565581 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1024 19:20:45.548289  565581 command_runner.go:130] > Device: 37h/55d	Inode: 2845527     Links: 1
	I1024 19:20:45.548295  565581 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:20:45.548302  565581 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1024 19:20:45.548307  565581 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1024 19:20:45.548320  565581 command_runner.go:130] > Change: 2023-10-24 19:00:55.078905836 +0000
	I1024 19:20:45.548326  565581 command_runner.go:130] >  Birth: 2023-10-24 19:00:55.078905836 +0000
	I1024 19:20:45.548575  565581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:20:45.572473  565581 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:20:45.572583  565581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:20:45.611442  565581 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1024 19:20:45.611506  565581 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
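Renaming the default bridge CNI configs to *.mk_disabled (rather than deleting them) keeps them restorable while ensuring CRI-O ignores them, leaving pod networking to the kindnet config installed later. The same pattern by hand (a sketch; run as root, with a guard against globs that match nothing):

	cd /etc/cni/net.d
	for f in *bridge* *podman*; do
	  [ -e "$f" ] && mv "$f" "$f.mk_disabled"
	done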
	I1024 19:20:45.611536  565581 start.go:472] detecting cgroup driver to use...
	I1024 19:20:45.611585  565581 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:20:45.611639  565581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:20:45.630865  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:20:45.645055  565581 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:20:45.645125  565581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:20:45.659826  565581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:20:45.673839  565581 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:20:45.759398  565581 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:20:45.773611  565581 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 19:20:45.840403  565581 docker.go:214] disabling docker service ...
	I1024 19:20:45.840473  565581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:20:45.862976  565581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:20:45.876434  565581 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:20:45.888997  565581 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 19:20:45.961170  565581 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:20:45.972943  565581 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 19:20:46.045402  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:20:46.056452  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:20:46.070735  565581 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
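With /etc/crictl.yaml pointing at the CRI-O socket, crictl no longer needs --runtime-endpoint on every invocation. A quick sanity check (a sketch; run as root inside the node):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	crictl info >/dev/null && echo "crictl can reach CRI-O"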
	I1024 19:20:46.071490  565581 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:20:46.071553  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:20:46.080608  565581 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:20:46.080681  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:20:46.089709  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:20:46.099445  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
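After the three sed edits above, the drop-in should carry settings along these lines (a sketch of the resulting TOML; the exact section layout of 02-crio.conf may differ):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"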
	I1024 19:20:46.108986  565581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:20:46.119775  565581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:20:46.127840  565581 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 19:20:46.128656  565581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:20:46.140937  565581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:20:46.220264  565581 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:20:46.313935  565581 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:20:46.314003  565581 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:20:46.317407  565581 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:20:46.317432  565581 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:20:46.317438  565581 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I1024 19:20:46.317445  565581 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:20:46.317451  565581 command_runner.go:130] > Access: 2023-10-24 19:20:46.297199447 +0000
	I1024 19:20:46.317473  565581 command_runner.go:130] > Modify: 2023-10-24 19:20:46.297199447 +0000
	I1024 19:20:46.317490  565581 command_runner.go:130] > Change: 2023-10-24 19:20:46.297199447 +0000
	I1024 19:20:46.317496  565581 command_runner.go:130] >  Birth: -
	I1024 19:20:46.317526  565581 start.go:540] Will wait 60s for crictl version
	I1024 19:20:46.317566  565581 ssh_runner.go:195] Run: which crictl
	I1024 19:20:46.320935  565581 command_runner.go:130] > /usr/bin/crictl
	I1024 19:20:46.321067  565581 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:20:46.356154  565581 command_runner.go:130] > Version:  0.1.0
	I1024 19:20:46.356184  565581 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:20:46.356199  565581 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1024 19:20:46.356220  565581 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:20:46.356244  565581 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:20:46.356309  565581 ssh_runner.go:195] Run: crio --version
	I1024 19:20:46.393004  565581 command_runner.go:130] > crio version 1.24.6
	I1024 19:20:46.393030  565581 command_runner.go:130] > Version:          1.24.6
	I1024 19:20:46.393037  565581 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 19:20:46.393041  565581 command_runner.go:130] > GitTreeState:     clean
	I1024 19:20:46.393047  565581 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 19:20:46.393052  565581 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 19:20:46.393056  565581 command_runner.go:130] > Compiler:         gc
	I1024 19:20:46.393061  565581 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:20:46.393066  565581 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:20:46.393073  565581 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:20:46.393077  565581 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:20:46.393081  565581 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:20:46.393146  565581 ssh_runner.go:195] Run: crio --version
	I1024 19:20:46.433372  565581 command_runner.go:130] > crio version 1.24.6
	I1024 19:20:46.433404  565581 command_runner.go:130] > Version:          1.24.6
	I1024 19:20:46.433420  565581 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 19:20:46.433428  565581 command_runner.go:130] > GitTreeState:     clean
	I1024 19:20:46.433440  565581 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 19:20:46.433458  565581 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 19:20:46.433468  565581 command_runner.go:130] > Compiler:         gc
	I1024 19:20:46.433476  565581 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:20:46.433485  565581 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:20:46.433500  565581 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:20:46.433511  565581 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:20:46.433530  565581 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:20:46.438087  565581 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:20:46.440023  565581 cli_runner.go:164] Run: docker network inspect multinode-961484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
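The Go template in that docker command flattens the network's name, driver, subnet, gateway, MTU, and container IPs into a single JSON-like line. A smaller probe of the same data, using only standard docker inspect template features (network name taken from the log):

    # Dump just the IPAM block (subnet and gateway) of the cluster network.
    docker network inspect multinode-961484 --format '{{json .IPAM.Config}}'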
	I1024 19:20:46.459708  565581 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1024 19:20:46.463667  565581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
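That one-liner keeps /etc/hosts current by filtering out any stale host.minikube.internal line, appending the fresh mapping, and copying the temp file back over /etc/hosts. Spelled out (this is the logged command reformatted, not new behavior):

    # Rebuild /etc/hosts without the old entry, then append the new mapping.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.58.1\thost.minikube.internal'
    } > /tmp/h.$$
    # cp (rather than mv) preserves the ownership and labels of /etc/hosts.
    sudo cp /tmp/h.$$ /etc/hosts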
	I1024 19:20:46.476327  565581 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:20:46.476391  565581 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:20:46.534445  565581 command_runner.go:130] > {
	I1024 19:20:46.534465  565581 command_runner.go:130] >   "images": [
	I1024 19:20:46.534470  565581 command_runner.go:130] >     {
	I1024 19:20:46.534477  565581 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1024 19:20:46.534484  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.534490  565581 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1024 19:20:46.534493  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534499  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.534511  565581 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1024 19:20:46.534522  565581 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1024 19:20:46.534531  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534539  565581 command_runner.go:130] >       "size": "65258016",
	I1024 19:20:46.534548  565581 command_runner.go:130] >       "uid": null,
	I1024 19:20:46.534553  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.534559  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.534564  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.534567  565581 command_runner.go:130] >     },
	I1024 19:20:46.534576  565581 command_runner.go:130] >     {
	I1024 19:20:46.534585  565581 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1024 19:20:46.534589  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.534596  565581 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1024 19:20:46.534605  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534612  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.534628  565581 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1024 19:20:46.534641  565581 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1024 19:20:46.534651  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534663  565581 command_runner.go:130] >       "size": "31470524",
	I1024 19:20:46.534670  565581 command_runner.go:130] >       "uid": null,
	I1024 19:20:46.534675  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.534680  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.534684  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.534690  565581 command_runner.go:130] >     },
	I1024 19:20:46.534694  565581 command_runner.go:130] >     {
	I1024 19:20:46.534700  565581 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1024 19:20:46.534709  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.534723  565581 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1024 19:20:46.534733  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534740  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.534756  565581 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1024 19:20:46.534771  565581 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1024 19:20:46.534778  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534782  565581 command_runner.go:130] >       "size": "53621675",
	I1024 19:20:46.534789  565581 command_runner.go:130] >       "uid": null,
	I1024 19:20:46.534793  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.534797  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.534804  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.534807  565581 command_runner.go:130] >     },
	I1024 19:20:46.534816  565581 command_runner.go:130] >     {
	I1024 19:20:46.534826  565581 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1024 19:20:46.534837  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.534846  565581 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1024 19:20:46.534856  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534864  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.534882  565581 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1024 19:20:46.534896  565581 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1024 19:20:46.534909  565581 command_runner.go:130] >       ],
	I1024 19:20:46.534917  565581 command_runner.go:130] >       "size": "295456551",
	I1024 19:20:46.534923  565581 command_runner.go:130] >       "uid": {
	I1024 19:20:46.534930  565581 command_runner.go:130] >         "value": "0"
	I1024 19:20:46.534941  565581 command_runner.go:130] >       },
	I1024 19:20:46.534948  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.534958  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.534968  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.534975  565581 command_runner.go:130] >     },
	I1024 19:20:46.534984  565581 command_runner.go:130] >     {
	I1024 19:20:46.534993  565581 command_runner.go:130] >       "id": "53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076",
	I1024 19:20:46.535000  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.535009  565581 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1024 19:20:46.535018  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535025  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.535041  565581 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab",
	I1024 19:20:46.535064  565581 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1024 19:20:46.535074  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535082  565581 command_runner.go:130] >       "size": "127165392",
	I1024 19:20:46.535090  565581 command_runner.go:130] >       "uid": {
	I1024 19:20:46.535094  565581 command_runner.go:130] >         "value": "0"
	I1024 19:20:46.535102  565581 command_runner.go:130] >       },
	I1024 19:20:46.535110  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.535120  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.535128  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.535137  565581 command_runner.go:130] >     },
	I1024 19:20:46.535144  565581 command_runner.go:130] >     {
	I1024 19:20:46.535158  565581 command_runner.go:130] >       "id": "10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3",
	I1024 19:20:46.535168  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.535181  565581 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1024 19:20:46.535194  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535204  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.535221  565581 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1024 19:20:46.535237  565581 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"
	I1024 19:20:46.535250  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535261  565581 command_runner.go:130] >       "size": "123188534",
	I1024 19:20:46.535270  565581 command_runner.go:130] >       "uid": {
	I1024 19:20:46.535276  565581 command_runner.go:130] >         "value": "0"
	I1024 19:20:46.535282  565581 command_runner.go:130] >       },
	I1024 19:20:46.535288  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.535299  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.535315  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.535324  565581 command_runner.go:130] >     },
	I1024 19:20:46.535331  565581 command_runner.go:130] >     {
	I1024 19:20:46.535345  565581 command_runner.go:130] >       "id": "bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf",
	I1024 19:20:46.535355  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.535364  565581 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1024 19:20:46.535368  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535376  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.535391  565581 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8",
	I1024 19:20:46.535407  565581 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1024 19:20:46.535416  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535428  565581 command_runner.go:130] >       "size": "74691991",
	I1024 19:20:46.535438  565581 command_runner.go:130] >       "uid": null,
	I1024 19:20:46.535449  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.535456  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.535466  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.535476  565581 command_runner.go:130] >     },
	I1024 19:20:46.535482  565581 command_runner.go:130] >     {
	I1024 19:20:46.535495  565581 command_runner.go:130] >       "id": "6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4",
	I1024 19:20:46.535506  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.535515  565581 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1024 19:20:46.535524  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535531  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.535603  565581 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1024 19:20:46.535621  565581 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"
	I1024 19:20:46.535627  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535635  565581 command_runner.go:130] >       "size": "61498678",
	I1024 19:20:46.535646  565581 command_runner.go:130] >       "uid": {
	I1024 19:20:46.535653  565581 command_runner.go:130] >         "value": "0"
	I1024 19:20:46.535662  565581 command_runner.go:130] >       },
	I1024 19:20:46.535672  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.535679  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.535689  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.535694  565581 command_runner.go:130] >     },
	I1024 19:20:46.535704  565581 command_runner.go:130] >     {
	I1024 19:20:46.535714  565581 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1024 19:20:46.535723  565581 command_runner.go:130] >       "repoTags": [
	I1024 19:20:46.535731  565581 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1024 19:20:46.535736  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535748  565581 command_runner.go:130] >       "repoDigests": [
	I1024 19:20:46.535761  565581 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1024 19:20:46.535775  565581 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1024 19:20:46.535785  565581 command_runner.go:130] >       ],
	I1024 19:20:46.535791  565581 command_runner.go:130] >       "size": "750414",
	I1024 19:20:46.535799  565581 command_runner.go:130] >       "uid": {
	I1024 19:20:46.535805  565581 command_runner.go:130] >         "value": "65535"
	I1024 19:20:46.535815  565581 command_runner.go:130] >       },
	I1024 19:20:46.535828  565581 command_runner.go:130] >       "username": "",
	I1024 19:20:46.535838  565581 command_runner.go:130] >       "spec": null,
	I1024 19:20:46.535849  565581 command_runner.go:130] >       "pinned": false
	I1024 19:20:46.535857  565581 command_runner.go:130] >     }
	I1024 19:20:46.535866  565581 command_runner.go:130] >   ]
	I1024 19:20:46.535874  565581 command_runner.go:130] > }
	I1024 19:20:46.536923  565581 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:20:46.536945  565581 crio.go:415] Images already preloaded, skipping extraction
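The preload check boils down to comparing the repoTags in that JSON against the images expected for Kubernetes v1.28.3 on cri-o. A rough shell equivalent of what minikube does here in Go (jq assumed to be available):

    # List every tag CRI-O knows about; the preload is complete when all
    # expected tags (kube-apiserver, etcd, coredns, pause, ...) are present.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort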
	I1024 19:20:46.575233  565581 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:20:46.575267  565581 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:20:46.575382  565581 ssh_runner.go:195] Run: crio config
	I1024 19:20:46.617327  565581 command_runner.go:130] ! time="2023-10-24 19:20:46.616770425Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1024 19:20:46.617366  565581 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 19:20:46.622968  565581 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:20:46.623002  565581 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:20:46.623010  565581 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:20:46.623019  565581 command_runner.go:130] > #
	I1024 19:20:46.623027  565581 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:20:46.623034  565581 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:20:46.623043  565581 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:20:46.623056  565581 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:20:46.623063  565581 command_runner.go:130] > # reload'.
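As the comment says, the partial reload is triggered with SIGHUP; from a shell that is simply (assuming the daemon's process name is crio):

    # Ask the running CRI-O daemon to re-read its live-reloadable options.
    sudo pkill -HUP -x crio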
	I1024 19:20:46.623073  565581 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:20:46.623090  565581 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:20:46.623111  565581 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:20:46.623118  565581 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:20:46.623121  565581 command_runner.go:130] > [crio]
	I1024 19:20:46.623127  565581 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:20:46.623132  565581 command_runner.go:130] > # container images, in this directory.
	I1024 19:20:46.623141  565581 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1024 19:20:46.623151  565581 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:20:46.623160  565581 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1024 19:20:46.623172  565581 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:20:46.623185  565581 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:20:46.623196  565581 command_runner.go:130] > # storage_driver = "vfs"
	I1024 19:20:46.623208  565581 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:20:46.623217  565581 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:20:46.623230  565581 command_runner.go:130] > # storage_option = [
	I1024 19:20:46.623239  565581 command_runner.go:130] > # ]
	I1024 19:20:46.623253  565581 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:20:46.623268  565581 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:20:46.623281  565581 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:20:46.623306  565581 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:20:46.623319  565581 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:20:46.623326  565581 command_runner.go:130] > # always happen on a node reboot
	I1024 19:20:46.623334  565581 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:20:46.623348  565581 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:20:46.623360  565581 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:20:46.623381  565581 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:20:46.623398  565581 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:20:46.623408  565581 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:20:46.623422  565581 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:20:46.623433  565581 command_runner.go:130] > # internal_wipe = true
	I1024 19:20:46.623452  565581 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:20:46.623466  565581 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:20:46.623482  565581 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:20:46.623494  565581 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:20:46.623504  565581 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:20:46.623514  565581 command_runner.go:130] > [crio.api]
	I1024 19:20:46.623540  565581 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:20:46.623553  565581 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:20:46.623562  565581 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:20:46.623574  565581 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:20:46.623588  565581 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:20:46.623601  565581 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:20:46.623611  565581 command_runner.go:130] > # stream_port = "0"
	I1024 19:20:46.623621  565581 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:20:46.623629  565581 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:20:46.623643  565581 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:20:46.623654  565581 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:20:46.623665  565581 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:20:46.623679  565581 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:20:46.623689  565581 command_runner.go:130] > # minutes.
	I1024 19:20:46.623705  565581 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:20:46.623718  565581 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:20:46.623728  565581 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:20:46.623743  565581 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:20:46.623758  565581 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:20:46.623773  565581 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:20:46.623786  565581 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:20:46.623797  565581 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:20:46.623812  565581 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:20:46.623822  565581 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1024 19:20:46.623833  565581 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:20:46.623845  565581 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1024 19:20:46.623892  565581 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:20:46.623907  565581 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:20:46.623914  565581 command_runner.go:130] > [crio.runtime]
	I1024 19:20:46.623925  565581 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:20:46.623935  565581 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:20:46.623946  565581 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:20:46.623966  565581 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:20:46.623977  565581 command_runner.go:130] > # default_ulimits = [
	I1024 19:20:46.623987  565581 command_runner.go:130] > # ]
	I1024 19:20:46.624001  565581 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:20:46.624011  565581 command_runner.go:130] > # no_pivot = false
	I1024 19:20:46.624024  565581 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:20:46.624037  565581 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:20:46.624049  565581 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:20:46.624062  565581 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:20:46.624075  565581 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:20:46.624090  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:20:46.624100  565581 command_runner.go:130] > # conmon = ""
	I1024 19:20:46.624111  565581 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:20:46.624129  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:20:46.624142  565581 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:20:46.624157  565581 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:20:46.624171  565581 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:20:46.624185  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:20:46.624200  565581 command_runner.go:130] > # conmon_env = [
	I1024 19:20:46.624210  565581 command_runner.go:130] > # ]
	I1024 19:20:46.624221  565581 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:20:46.624229  565581 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:20:46.624242  565581 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:20:46.624254  565581 command_runner.go:130] > # default_env = [
	I1024 19:20:46.624260  565581 command_runner.go:130] > # ]
	I1024 19:20:46.624274  565581 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:20:46.624284  565581 command_runner.go:130] > # selinux = false
	I1024 19:20:46.624297  565581 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:20:46.624323  565581 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:20:46.624334  565581 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:20:46.624345  565581 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:20:46.624420  565581 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:20:46.624451  565581 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:20:46.624465  565581 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:20:46.624477  565581 command_runner.go:130] > # which might increase security.
	I1024 19:20:46.624489  565581 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1024 19:20:46.624511  565581 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:20:46.624527  565581 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:20:46.624538  565581 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:20:46.624566  565581 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:20:46.624579  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:20:46.624591  565581 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:20:46.624602  565581 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:20:46.624611  565581 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:20:46.624622  565581 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:20:46.624638  565581 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:20:46.624650  565581 command_runner.go:130] > # irqbalance daemon.
	I1024 19:20:46.624663  565581 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:20:46.624677  565581 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:20:46.624690  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:20:46.624700  565581 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:20:46.624708  565581 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:20:46.624717  565581 command_runner.go:130] > cgroup_manager = "cgroupfs"
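Note that cgroup_manager = "cgroupfs" must agree with the kubelet's cgroup driver. Which cgroup hierarchy the host actually runs can be checked with a one-line probe (not part of the log above):

    # cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy v1.
    stat -fc %T /sys/fs/cgroup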
	I1024 19:20:46.624732  565581 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:20:46.624752  565581 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:20:46.624787  565581 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:20:46.624802  565581 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:20:46.624810  565581 command_runner.go:130] > # will be added.
	I1024 19:20:46.624821  565581 command_runner.go:130] > # default_capabilities = [
	I1024 19:20:46.624831  565581 command_runner.go:130] > # 	"CHOWN",
	I1024 19:20:46.624842  565581 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:20:46.624852  565581 command_runner.go:130] > # 	"FSETID",
	I1024 19:20:46.624868  565581 command_runner.go:130] > # 	"FOWNER",
	I1024 19:20:46.624875  565581 command_runner.go:130] > # 	"SETGID",
	I1024 19:20:46.624881  565581 command_runner.go:130] > # 	"SETUID",
	I1024 19:20:46.624906  565581 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:20:46.624918  565581 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:20:46.624925  565581 command_runner.go:130] > # 	"KILL",
	I1024 19:20:46.624935  565581 command_runner.go:130] > # ]
	I1024 19:20:46.624952  565581 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1024 19:20:46.624967  565581 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1024 19:20:46.624979  565581 command_runner.go:130] > # add_inheritable_capabilities = true
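Defaults like the capability list above are normally overridden with a drop-in rather than by editing the main file; CRI-O reads drop-ins from /etc/crio/crio.conf.d. An illustrative sketch only (the file name and the trimmed capability set are hypothetical):

    # Hypothetical drop-in: trim the default capability set for all containers.
    printf '%s\n' '[crio.runtime]' \
      'default_capabilities = ["CHOWN", "DAC_OVERRIDE", "FOWNER", "SETGID", "SETUID"]' \
      | sudo tee /etc/crio/crio.conf.d/20-caps.conf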
	I1024 19:20:46.624993  565581 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:20:46.625008  565581 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:20:46.625020  565581 command_runner.go:130] > # default_sysctls = [
	I1024 19:20:46.625026  565581 command_runner.go:130] > # ]
	I1024 19:20:46.625040  565581 command_runner.go:130] > # List of devices on the host that a
	I1024 19:20:46.625055  565581 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:20:46.625066  565581 command_runner.go:130] > # allowed_devices = [
	I1024 19:20:46.625077  565581 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:20:46.625087  565581 command_runner.go:130] > # ]
	I1024 19:20:46.625100  565581 command_runner.go:130] > # List of additional devices, specified as
	I1024 19:20:46.625167  565581 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:20:46.625181  565581 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:20:46.625192  565581 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:20:46.625203  565581 command_runner.go:130] > # additional_devices = [
	I1024 19:20:46.625214  565581 command_runner.go:130] > # ]
	I1024 19:20:46.625226  565581 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:20:46.625237  565581 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:20:46.625248  565581 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:20:46.625260  565581 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:20:46.625269  565581 command_runner.go:130] > # ]
	I1024 19:20:46.625284  565581 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:20:46.625300  565581 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:20:46.625311  565581 command_runner.go:130] > # Defaults to false.
	I1024 19:20:46.625324  565581 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:20:46.625338  565581 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:20:46.625349  565581 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:20:46.625375  565581 command_runner.go:130] > # hooks_dir = [
	I1024 19:20:46.625389  565581 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:20:46.625399  565581 command_runner.go:130] > # ]
	I1024 19:20:46.625411  565581 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:20:46.625427  565581 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:20:46.625440  565581 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:20:46.625449  565581 command_runner.go:130] > #
	I1024 19:20:46.625464  565581 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:20:46.625475  565581 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:20:46.625487  565581 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:20:46.625501  565581 command_runner.go:130] > #
	I1024 19:20:46.625517  565581 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:20:46.625532  565581 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:20:46.625547  565581 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:20:46.625560  565581 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:20:46.625584  565581 command_runner.go:130] > #
	I1024 19:20:46.625596  565581 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:20:46.625610  565581 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:20:46.625625  565581 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:20:46.625636  565581 command_runner.go:130] > # pids_limit = 0
	I1024 19:20:46.625651  565581 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 19:20:46.625663  565581 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:20:46.625675  565581 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:20:46.625697  565581 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:20:46.625709  565581 command_runner.go:130] > # log_size_max = -1
	I1024 19:20:46.625725  565581 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:20:46.625736  565581 command_runner.go:130] > # log_to_journald = false
	I1024 19:20:46.625749  565581 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:20:46.625763  565581 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:20:46.625777  565581 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:20:46.625790  565581 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:20:46.625804  565581 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:20:46.625815  565581 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:20:46.625830  565581 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:20:46.625839  565581 command_runner.go:130] > # read_only = false
	I1024 19:20:46.625852  565581 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:20:46.625872  565581 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:20:46.625883  565581 command_runner.go:130] > # live configuration reload.
	I1024 19:20:46.625895  565581 command_runner.go:130] > # log_level = "info"
	I1024 19:20:46.625909  565581 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:20:46.625922  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:20:46.625932  565581 command_runner.go:130] > # log_filter = ""
	I1024 19:20:46.625942  565581 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:20:46.625960  565581 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:20:46.625972  565581 command_runner.go:130] > # separated by comma.
	I1024 19:20:46.625980  565581 command_runner.go:130] > # uid_mappings = ""
	I1024 19:20:46.625998  565581 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:20:46.626012  565581 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:20:46.626024  565581 command_runner.go:130] > # separated by comma.
	I1024 19:20:46.626035  565581 command_runner.go:130] > # gid_mappings = ""
	I1024 19:20:46.626046  565581 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:20:46.626060  565581 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:20:46.626084  565581 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:20:46.626096  565581 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:20:46.626111  565581 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:20:46.626125  565581 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:20:46.626140  565581 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:20:46.626149  565581 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:20:46.626161  565581 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:20:46.626175  565581 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:20:46.626195  565581 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1024 19:20:46.626206  565581 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:20:46.626221  565581 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:20:46.626237  565581 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:20:46.626252  565581 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:20:46.626265  565581 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:20:46.626277  565581 command_runner.go:130] > # drop_infra_ctr = true
	I1024 19:20:46.626298  565581 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:20:46.626312  565581 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:20:46.626327  565581 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:20:46.626335  565581 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:20:46.626346  565581 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:20:46.626358  565581 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:20:46.626372  565581 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:20:46.626388  565581 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:20:46.626399  565581 command_runner.go:130] > # pinns_path = ""
	I1024 19:20:46.626414  565581 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:20:46.626426  565581 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:20:46.626440  565581 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:20:46.626453  565581 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:20:46.626463  565581 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:20:46.626480  565581 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1024 19:20:46.626502  565581 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:20:46.626516  565581 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:20:46.626529  565581 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:20:46.626542  565581 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:20:46.626554  565581 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:20:46.626561  565581 command_runner.go:130] > # ]
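	As an illustration of the option documented above, a populated entry is a plain TOML array of host paths; a minimal sketch using the /etc/hostname example from the comments (not a value taken from this run):

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]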
	I1024 19:20:46.626576  565581 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:20:46.626592  565581 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:20:46.626607  565581 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:20:46.626621  565581 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:20:46.626627  565581 command_runner.go:130] > #
	I1024 19:20:46.626636  565581 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:20:46.626649  565581 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:20:46.626661  565581 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:20:46.626677  565581 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:20:46.626690  565581 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:20:46.626702  565581 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:20:46.626710  565581 command_runner.go:130] > # Where:
	I1024 19:20:46.626741  565581 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:20:46.626762  565581 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:20:46.626778  565581 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:20:46.626793  565581 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:20:46.626804  565581 command_runner.go:130] > #   in $PATH.
	I1024 19:20:46.626819  565581 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:20:46.626828  565581 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:20:46.626842  565581 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:20:46.626854  565581 command_runner.go:130] > #   state.
	I1024 19:20:46.626871  565581 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:20:46.626886  565581 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 19:20:46.626900  565581 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:20:46.626914  565581 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:20:46.626925  565581 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:20:46.626939  565581 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:20:46.626953  565581 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:20:46.626969  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:20:46.626986  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:20:46.627005  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:20:46.627019  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:20:46.627032  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:20:46.627047  565581 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:20:46.627062  565581 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:20:46.627078  565581 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:20:46.627091  565581 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:20:46.627102  565581 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:20:46.627111  565581 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1024 19:20:46.627121  565581 command_runner.go:130] > runtime_type = "oci"
	I1024 19:20:46.627132  565581 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:20:46.627145  565581 command_runner.go:130] > runtime_config_path = ""
	I1024 19:20:46.627156  565581 command_runner.go:130] > monitor_path = ""
	I1024 19:20:46.627167  565581 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:20:46.627179  565581 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:20:46.627262  565581 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:20:46.627276  565581 command_runner.go:130] > # running containers
	I1024 19:20:46.627284  565581 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:20:46.627304  565581 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:20:46.627319  565581 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:20:46.627334  565581 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1024 19:20:46.627349  565581 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:20:46.627361  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:20:46.627374  565581 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:20:46.627386  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:20:46.627405  565581 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:20:46.627417  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
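	For illustration, enabling one of these handlers follows the table format documented above; a hypothetical crun entry (the paths are assumptions, not values observed on this host) could look like:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"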
	I1024 19:20:46.627433  565581 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:20:46.627446  565581 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:20:46.627458  565581 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:20:46.627474  565581 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1024 19:20:46.627492  565581 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:20:46.627507  565581 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:20:46.627527  565581 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:20:46.627544  565581 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:20:46.627556  565581 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:20:46.627574  565581 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:20:46.627586  565581 command_runner.go:130] > # Example:
	I1024 19:20:46.627595  565581 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:20:46.627608  565581 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:20:46.627621  565581 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:20:46.627634  565581 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:20:46.627644  565581 command_runner.go:130] > # cpuset = "0-1"
	I1024 19:20:46.627654  565581 command_runner.go:130] > # cpushares = 0
	I1024 19:20:46.627661  565581 command_runner.go:130] > # Where:
	I1024 19:20:46.627670  565581 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:20:46.627687  565581 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:20:46.627701  565581 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:20:46.627715  565581 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:20:46.627732  565581 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:20:46.627746  565581 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 19:20:46.627756  565581 command_runner.go:130] > # 
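	Tying the workload example together, a pod opts in by carrying the activation annotation, optionally with the per-container override form shown above; a hypothetical manifest fragment (pod name, container name, and the "512" value are illustrative):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: demo
	  annotations:
	    io.crio/workload: ""
	    io.crio.workload-type/demo-ctr: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo-ctr
	    image: registry.k8s.io/pause:3.9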
	I1024 19:20:46.627780  565581 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:20:46.627790  565581 command_runner.go:130] > #
	I1024 19:20:46.627808  565581 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:20:46.627823  565581 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:20:46.627834  565581 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:20:46.627848  565581 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:20:46.627868  565581 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:20:46.627879  565581 command_runner.go:130] > [crio.image]
	I1024 19:20:46.627895  565581 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:20:46.627907  565581 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:20:46.627931  565581 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:20:46.627943  565581 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:20:46.627954  565581 command_runner.go:130] > # global_auth_file = ""
	I1024 19:20:46.627968  565581 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:20:46.627981  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:20:46.627993  565581 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:20:46.628008  565581 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:20:46.628019  565581 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:20:46.628029  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:20:46.628041  565581 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:20:46.628059  565581 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:20:46.628073  565581 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1024 19:20:46.628087  565581 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1024 19:20:46.628100  565581 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:20:46.628108  565581 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:20:46.628118  565581 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:20:46.628133  565581 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:20:46.628148  565581 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:20:46.628162  565581 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:20:46.628176  565581 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:20:46.628187  565581 command_runner.go:130] > # signature_policy = ""
	I1024 19:20:46.628205  565581 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:20:46.628219  565581 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:20:46.628231  565581 command_runner.go:130] > # changing them here.
	I1024 19:20:46.628243  565581 command_runner.go:130] > # insecure_registries = [
	I1024 19:20:46.628271  565581 command_runner.go:130] > # ]
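	Should a CRI-O-local exception ever be needed despite the advice above, the populated form is again a TOML array; an illustrative, hypothetical value:

	insecure_registries = [
		"registry.internal.example:5000",
	]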
	I1024 19:20:46.628285  565581 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:20:46.628297  565581 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1024 19:20:46.628310  565581 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:20:46.628323  565581 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:20:46.628335  565581 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:20:46.628346  565581 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:20:46.628357  565581 command_runner.go:130] > # CNI plugins.
	I1024 19:20:46.628367  565581 command_runner.go:130] > [crio.network]
	I1024 19:20:46.628381  565581 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:20:46.628394  565581 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1024 19:20:46.628403  565581 command_runner.go:130] > # cni_default_network = ""
	I1024 19:20:46.628414  565581 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:20:46.628425  565581 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:20:46.628439  565581 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:20:46.628450  565581 command_runner.go:130] > # plugin_dirs = [
	I1024 19:20:46.628461  565581 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:20:46.628471  565581 command_runner.go:130] > # ]
	I1024 19:20:46.628485  565581 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 19:20:46.628495  565581 command_runner.go:130] > [crio.metrics]
	I1024 19:20:46.628506  565581 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:20:46.628517  565581 command_runner.go:130] > # enable_metrics = false
	I1024 19:20:46.628529  565581 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:20:46.628542  565581 command_runner.go:130] > # By default, all metrics are enabled.
	I1024 19:20:46.628553  565581 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:20:46.628568  565581 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:20:46.628585  565581 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:20:46.628596  565581 command_runner.go:130] > # metrics_collectors = [
	I1024 19:20:46.628606  565581 command_runner.go:130] > # 	"operations",
	I1024 19:20:46.628615  565581 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:20:46.628625  565581 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:20:46.628637  565581 command_runner.go:130] > # 	"operations_errors",
	I1024 19:20:46.628646  565581 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:20:46.628658  565581 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:20:46.628670  565581 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:20:46.628681  565581 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:20:46.628692  565581 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:20:46.628702  565581 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:20:46.628709  565581 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:20:46.628719  565581 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:20:46.628731  565581 command_runner.go:130] > # 	"containers_oom",
	I1024 19:20:46.628743  565581 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:20:46.628753  565581 command_runner.go:130] > # 	"operations_total",
	I1024 19:20:46.628765  565581 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:20:46.628791  565581 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:20:46.628804  565581 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:20:46.628812  565581 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:20:46.628836  565581 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:20:46.628847  565581 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:20:46.628859  565581 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:20:46.628871  565581 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:20:46.628879  565581 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:20:46.628885  565581 command_runner.go:130] > # ]
	I1024 19:20:46.628900  565581 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:20:46.628911  565581 command_runner.go:130] > # metrics_port = 9090
	I1024 19:20:46.628923  565581 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:20:46.628934  565581 command_runner.go:130] > # metrics_socket = ""
	I1024 19:20:46.628949  565581 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:20:46.628960  565581 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:20:46.628972  565581 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:20:46.628984  565581 command_runner.go:130] > # certificate on any modification event.
	I1024 19:20:46.628995  565581 command_runner.go:130] > # metrics_cert = ""
	I1024 19:20:46.629008  565581 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:20:46.629019  565581 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:20:46.629030  565581 command_runner.go:130] > # metrics_key = ""
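	Taken together, exposing the Prometheus endpoint on the default port only requires flipping the flag documented above; a minimal sketch:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090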
	I1024 19:20:46.629042  565581 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:20:46.629053  565581 command_runner.go:130] > [crio.tracing]
	I1024 19:20:46.629066  565581 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:20:46.629078  565581 command_runner.go:130] > # enable_tracing = false
	I1024 19:20:46.629088  565581 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 19:20:46.629099  565581 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:20:46.629111  565581 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:20:46.629122  565581 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:20:46.629136  565581 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:20:46.629145  565581 command_runner.go:130] > [crio.stats]
	I1024 19:20:46.629159  565581 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:20:46.629172  565581 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:20:46.629184  565581 command_runner.go:130] > # stats_collection_period = 0
	I1024 19:20:46.629308  565581 cni.go:84] Creating CNI manager for ""
	I1024 19:20:46.629324  565581 cni.go:136] 1 nodes found, recommending kindnet
	I1024 19:20:46.629349  565581 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:20:46.629381  565581 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-961484 NodeName:multinode-961484 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:20:46.629586  565581 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-961484"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
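
	One way to sanity-check a generated config like the one above, independent of what this run does, is kubeadm's dry-run mode, which renders the would-be manifests without modifying the node:

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run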
	
	I1024 19:20:46.629689  565581 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-961484 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:20:46.629772  565581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:20:46.639224  565581 command_runner.go:130] > kubeadm
	I1024 19:20:46.639249  565581 command_runner.go:130] > kubectl
	I1024 19:20:46.639253  565581 command_runner.go:130] > kubelet
	I1024 19:20:46.639274  565581 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:20:46.639342  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:20:46.647613  565581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1024 19:20:46.664003  565581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:20:46.680873  565581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1024 19:20:46.697553  565581 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:20:46.701203  565581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
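	The hosts-file update above uses a sudo-safe pattern worth noting: the filtered content is assembled in a temp file by the unprivileged shell and then copied into place with sudo, because a plain redirection into /etc/hosts would be performed by the calling shell rather than by sudo; schematically (OLD_ENTRY/NEW_ENTRY are placeholders):

	{ grep -v 'OLD_ENTRY' /etc/hosts; echo 'NEW_ENTRY'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts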
	I1024 19:20:46.712925  565581 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484 for IP: 192.168.58.2
	I1024 19:20:46.712969  565581 certs.go:190] acquiring lock for shared ca certs: {Name:mkd071e4924662af2a94ad3f2018330ff8506826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:46.713226  565581 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key
	I1024 19:20:46.713295  565581 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key
	I1024 19:20:46.713363  565581 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key
	I1024 19:20:46.713391  565581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt with IP's: []
	I1024 19:20:46.945598  565581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt ...
	I1024 19:20:46.945632  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt: {Name:mk160735beaf5fb0f2033353efe423b45e6bb1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:46.945803  565581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key ...
	I1024 19:20:46.945812  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key: {Name:mk0336a600766fdc685c5e7debc389e0aec54134 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:46.945884  565581 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key.cee25041
	I1024 19:20:46.945897  565581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:20:47.035522  565581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt.cee25041 ...
	I1024 19:20:47.035562  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt.cee25041: {Name:mkdbcab3dc7864e83766cff13c1320af7c951a81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:47.035745  565581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key.cee25041 ...
	I1024 19:20:47.035759  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key.cee25041: {Name:mk2595e9dcf98995991d51b9f6556d32e3c6c6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:47.035850  565581 certs.go:337] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt
	I1024 19:20:47.035949  565581 certs.go:341] copying /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key
	I1024 19:20:47.036011  565581 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.key
	I1024 19:20:47.036022  565581 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.crt with IP's: []
	I1024 19:20:47.150012  565581 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.crt ...
	I1024 19:20:47.150137  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.crt: {Name:mk611cf27279e88a9165de062af3bdc126d576a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:47.150336  565581 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.key ...
	I1024 19:20:47.150347  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.key: {Name:mk674a6bbaa9300f40dee2e7a5d8b16f90ffd638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:20:47.150418  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:20:47.150447  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:20:47.150460  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:20:47.150473  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:20:47.150486  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:20:47.150496  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:20:47.150508  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:20:47.150520  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:20:47.150568  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem (1338 bytes)
	W1024 19:20:47.150602  565581 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323_empty.pem, impossibly tiny 0 bytes
	I1024 19:20:47.150613  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:20:47.150633  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:20:47.150666  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:20:47.150694  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem (1675 bytes)
	I1024 19:20:47.150741  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:20:47.150770  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /usr/share/ca-certificates/4783232.pem
	I1024 19:20:47.150782  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:20:47.150794  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem -> /usr/share/ca-certificates/478323.pem
	I1024 19:20:47.151421  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:20:47.179480  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1024 19:20:47.203669  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:20:47.228349  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 19:20:47.250614  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:20:47.274000  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:20:47.300406  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:20:47.327177  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:20:47.353319  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /usr/share/ca-certificates/4783232.pem (1708 bytes)
	I1024 19:20:47.380819  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:20:47.404856  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem --> /usr/share/ca-certificates/478323.pem (1338 bytes)
	I1024 19:20:47.431502  565581 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:20:47.449897  565581 ssh_runner.go:195] Run: openssl version
	I1024 19:20:47.455908  565581 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1024 19:20:47.456011  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783232.pem && ln -fs /usr/share/ca-certificates/4783232.pem /etc/ssl/certs/4783232.pem"
	I1024 19:20:47.467594  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783232.pem
	I1024 19:20:47.471369  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:20:47.471454  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:20:47.471505  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783232.pem
	I1024 19:20:47.478367  565581 command_runner.go:130] > 3ec20f2e
	I1024 19:20:47.478454  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783232.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:20:47.488526  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:20:47.498236  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:20:47.501830  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:20:47.501894  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:20:47.501939  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:20:47.508288  565581 command_runner.go:130] > b5213941
	I1024 19:20:47.508360  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:20:47.517626  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478323.pem && ln -fs /usr/share/ca-certificates/478323.pem /etc/ssl/certs/478323.pem"
	I1024 19:20:47.526951  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478323.pem
	I1024 19:20:47.530447  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:20:47.530487  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:20:47.530531  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478323.pem
	I1024 19:20:47.537084  565581 command_runner.go:130] > 51391683
	I1024 19:20:47.537149  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478323.pem /etc/ssl/certs/51391683.0"
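	The <hash>.0 symlinks created in the three blocks above follow OpenSSL's subject-hash lookup convention: the value printed by openssl x509 -hash (e.g. b5213941 for minikubeCA.pem) is the filename TLS clients probe under /etc/ssl/certs. A quick manual check mirroring this run's commands:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${HASH}.0"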
	I1024 19:20:47.546689  565581 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:20:47.550089  565581 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:20:47.550160  565581 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:20:47.550202  565581 kubeadm.go:404] StartCluster: {Name:multinode-961484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:20:47.550302  565581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:20:47.550352  565581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:20:47.590409  565581 cri.go:89] found id: ""
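	The empty "found id" above is expected on a fresh node: the probe lists container IDs only (--quiet) and filters on the io.kubernetes.pod.namespace=kube-system label, so no output means no pre-existing control-plane containers. Run by hand, the same probe is:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system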
	I1024 19:20:47.590483  565581 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:20:47.599790  565581 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1024 19:20:47.599833  565581 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1024 19:20:47.599845  565581 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1024 19:20:47.599935  565581 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:20:47.609703  565581 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:20:47.609782  565581 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:20:47.620308  565581 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1024 19:20:47.620339  565581 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1024 19:20:47.620347  565581 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1024 19:20:47.620354  565581 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:20:47.620419  565581 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:20:47.620465  565581 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:20:47.713438  565581 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1024 19:20:47.713505  565581 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1024 19:20:47.792257  565581 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:20:47.792319  565581 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:20:57.261557  565581 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:20:57.261590  565581 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1024 19:20:57.261641  565581 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:20:57.261652  565581 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:20:57.261827  565581 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:20:57.261855  565581 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:20:57.261941  565581 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-gcp
	I1024 19:20:57.261960  565581 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1024 19:20:57.262008  565581 kubeadm.go:322] OS: Linux
	I1024 19:20:57.262028  565581 command_runner.go:130] > OS: Linux
	I1024 19:20:57.262087  565581 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:20:57.262097  565581 command_runner.go:130] > CGROUPS_CPU: enabled
	I1024 19:20:57.262205  565581 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:20:57.262228  565581 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1024 19:20:57.262284  565581 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:20:57.262300  565581 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1024 19:20:57.262385  565581 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:20:57.262393  565581 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1024 19:20:57.262449  565581 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:20:57.262456  565581 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1024 19:20:57.262517  565581 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:20:57.262525  565581 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1024 19:20:57.262578  565581 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1024 19:20:57.262585  565581 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1024 19:20:57.262645  565581 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1024 19:20:57.262653  565581 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1024 19:20:57.262718  565581 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1024 19:20:57.262731  565581 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1024 19:20:57.262841  565581 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:20:57.262850  565581 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:20:57.262985  565581 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:20:57.262992  565581 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:20:57.263123  565581 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:20:57.263139  565581 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:20:57.263230  565581 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:20:57.265574  565581 out.go:204]   - Generating certificates and keys ...
	I1024 19:20:57.263405  565581 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:20:57.265718  565581 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:20:57.265741  565581 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1024 19:20:57.265850  565581 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:20:57.265860  565581 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1024 19:20:57.265964  565581 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:20:57.265975  565581 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:20:57.266045  565581 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:20:57.266054  565581 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:20:57.266134  565581 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:20:57.266143  565581 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1024 19:20:57.266214  565581 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:20:57.266218  565581 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1024 19:20:57.266266  565581 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:20:57.266287  565581 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1024 19:20:57.266502  565581 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-961484] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 19:20:57.266521  565581 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-961484] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 19:20:57.266597  565581 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:20:57.266610  565581 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1024 19:20:57.266781  565581 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-961484] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 19:20:57.266796  565581 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-961484] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 19:20:57.266923  565581 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:20:57.266946  565581 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:20:57.267072  565581 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:20:57.267087  565581 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:20:57.267151  565581 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:20:57.267162  565581 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1024 19:20:57.267245  565581 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:20:57.267269  565581 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:20:57.267350  565581 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:20:57.267356  565581 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:20:57.267424  565581 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:20:57.267432  565581 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:20:57.267503  565581 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:20:57.267508  565581 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:20:57.267568  565581 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:20:57.267573  565581 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:20:57.267654  565581 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:20:57.267662  565581 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:20:57.267738  565581 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:20:57.269799  565581 out.go:204]   - Booting up control plane ...
	I1024 19:20:57.267929  565581 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:20:57.269950  565581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:20:57.269979  565581 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:20:57.270159  565581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:20:57.270181  565581 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:20:57.270271  565581 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:20:57.270296  565581 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:20:57.270459  565581 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:20:57.270487  565581 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:20:57.270612  565581 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:20:57.270625  565581 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:20:57.270691  565581 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:20:57.270701  565581 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:20:57.270910  565581 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:20:57.270922  565581 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:20:57.270997  565581 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.503324 seconds
	I1024 19:20:57.271005  565581 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.503324 seconds
	I1024 19:20:57.271219  565581 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:20:57.271266  565581 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:20:57.271449  565581 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:20:57.271483  565581 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:20:57.271581  565581 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:20:57.271605  565581 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:20:57.271939  565581 kubeadm.go:322] [mark-control-plane] Marking the node multinode-961484 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:20:57.271979  565581 command_runner.go:130] > [mark-control-plane] Marking the node multinode-961484 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:20:57.272080  565581 kubeadm.go:322] [bootstrap-token] Using token: ftkzko.hmkbgdk383w593xj
	I1024 19:20:57.274095  565581 out.go:204]   - Configuring RBAC rules ...
	I1024 19:20:57.272170  565581 command_runner.go:130] > [bootstrap-token] Using token: ftkzko.hmkbgdk383w593xj
	I1024 19:20:57.274191  565581 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:20:57.274202  565581 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:20:57.274289  565581 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:20:57.274306  565581 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:20:57.274548  565581 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:20:57.274573  565581 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:20:57.274697  565581 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:20:57.274722  565581 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:20:57.274893  565581 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:20:57.274921  565581 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:20:57.275142  565581 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:20:57.275191  565581 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:20:57.275352  565581 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:20:57.275370  565581 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:20:57.275414  565581 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:20:57.275425  565581 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1024 19:20:57.275461  565581 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:20:57.275467  565581 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1024 19:20:57.275471  565581 kubeadm.go:322] 
	I1024 19:20:57.275549  565581 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:20:57.275568  565581 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1024 19:20:57.275587  565581 kubeadm.go:322] 
	I1024 19:20:57.275683  565581 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:20:57.275691  565581 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1024 19:20:57.275701  565581 kubeadm.go:322] 
	I1024 19:20:57.275722  565581 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:20:57.275728  565581 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1024 19:20:57.275802  565581 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:20:57.275816  565581 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:20:57.275879  565581 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:20:57.275891  565581 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:20:57.275897  565581 kubeadm.go:322] 
	I1024 19:20:57.275977  565581 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:20:57.275988  565581 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1024 19:20:57.275994  565581 kubeadm.go:322] 
	I1024 19:20:57.276047  565581 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:20:57.276058  565581 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:20:57.276063  565581 kubeadm.go:322] 
	I1024 19:20:57.276117  565581 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:20:57.276131  565581 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1024 19:20:57.276218  565581 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:20:57.276227  565581 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:20:57.276322  565581 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:20:57.276332  565581 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:20:57.276337  565581 kubeadm.go:322] 
	I1024 19:20:57.276445  565581 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:20:57.276465  565581 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:20:57.276572  565581 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:20:57.276582  565581 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1024 19:20:57.276588  565581 kubeadm.go:322] 
	I1024 19:20:57.276697  565581 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ftkzko.hmkbgdk383w593xj \
	I1024 19:20:57.276719  565581 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ftkzko.hmkbgdk383w593xj \
	I1024 19:20:57.276903  565581 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 \
	I1024 19:20:57.276922  565581 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 \
	I1024 19:20:57.276960  565581 kubeadm.go:322] 	--control-plane 
	I1024 19:20:57.276980  565581 command_runner.go:130] > 	--control-plane 
	I1024 19:20:57.276986  565581 kubeadm.go:322] 
	I1024 19:20:57.277128  565581 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:20:57.277154  565581 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:20:57.277171  565581 kubeadm.go:322] 
	I1024 19:20:57.277233  565581 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ftkzko.hmkbgdk383w593xj \
	I1024 19:20:57.277242  565581 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ftkzko.hmkbgdk383w593xj \
	I1024 19:20:57.277331  565581 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 
	I1024 19:20:57.277353  565581 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 
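The two kubeadm join commands above differ only in --control-plane; both pin the cluster CA via --discovery-token-ca-cert-hash, which is a SHA-256 over the CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that recomputes the hash for verification, assuming kubeadm's default CA path on the control-plane node:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// kubeadm's default CA location; adjust if the cluster uses another path.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The pin is sha256 over the DER-encoded SubjectPublicKeyInfo.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

The printed value should match the sha256:d853c7... pin in the join commands above.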
	I1024 19:20:57.277398  565581 cni.go:84] Creating CNI manager for ""
	I1024 19:20:57.277405  565581 cni.go:136] 1 nodes found, recommending kindnet
	I1024 19:20:57.280946  565581 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:20:57.282895  565581 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:20:57.345550  565581 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:20:57.345584  565581 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1024 19:20:57.345595  565581 command_runner.go:130] > Device: 37h/55d	Inode: 2849762     Links: 1
	I1024 19:20:57.345604  565581 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:20:57.345645  565581 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1024 19:20:57.345658  565581 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1024 19:20:57.345667  565581 command_runner.go:130] > Change: 2023-10-24 19:00:55.566952662 +0000
	I1024 19:20:57.345679  565581 command_runner.go:130] >  Birth: 2023-10-24 19:00:55.538949975 +0000
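The stat output above is minikube confirming that CNI plugin binaries (here portmap) are already installed before it renders and applies the kindnet manifest. A rough Go equivalent of that existence check (path taken from the log; treating any stat failure as a failed check is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Probe the same binary the log's `stat /opt/cni/bin/portmap` checks.
    	info, err := os.Stat("/opt/cni/bin/portmap")
    	if err != nil {
    		panic(err) // plugin missing: the preflight check would fail here
    	}
    	fmt.Printf("%s: %d bytes, mode %s\n", info.Name(), info.Size(), info.Mode())
    }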
	I1024 19:20:57.345794  565581 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:20:57.345810  565581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:20:57.365405  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:20:58.208220  565581 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1024 19:20:58.208242  565581 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1024 19:20:58.208248  565581 command_runner.go:130] > serviceaccount/kindnet created
	I1024 19:20:58.208252  565581 command_runner.go:130] > daemonset.apps/kindnet created
	I1024 19:20:58.208301  565581 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:20:58.208395  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:20:58.208423  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=multinode-961484 minikube.k8s.io/updated_at=2023_10_24T19_20_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
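The kubectl label call above stamps the node with minikube's bookkeeping labels (version, commit, profile name, primary marker). A hedged client-go sketch of the same labeling done as a strategic-merge patch, using only a subset of the labels from the log; clientset construction is omitted:

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // labelPrimary adds minikube-style bookkeeping labels to the primary node,
    // as the `kubectl label nodes ... --overwrite` call in the log does.
    func labelPrimary(ctx context.Context, cs kubernetes.Interface) error {
    	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/primary":"true","minikube.k8s.io/name":"multinode-961484"}}}`)
    	_, err := cs.CoreV1().Nodes().Patch(ctx, "multinode-961484",
    		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    	return err
    }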
	I1024 19:20:58.215515  565581 command_runner.go:130] > -16
	I1024 19:20:58.352934  565581 command_runner.go:130] > node/multinode-961484 labeled
	I1024 19:20:58.356613  565581 ops.go:34] apiserver oom_adj: -16
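The -16 read back above comes from /proc/<apiserver-pid>/oom_adj; a negative value tells the kernel OOM killer to spare the apiserver under memory pressure. A small Go sketch of the probe, mirroring the log's shell pipeline `cat /proc/$(pgrep kube-apiserver)/oom_adj`:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	// Take the first PID, as the shell's $(pgrep ...) expansion effectively does
    	// when a single apiserver is running.
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(strings.TrimSpace(string(adj))) // expect -16 per the log
    }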
	I1024 19:20:58.356692  565581 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1024 19:20:58.356855  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:20:58.471386  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:20:58.471479  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:20:58.538018  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:20:59.041871  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:20:59.107798  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:20:59.542084  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:20:59.618480  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:00.042214  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:00.122018  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:00.541550  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:00.621932  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:01.041831  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:01.112324  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:01.542049  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:01.611715  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:02.042092  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:02.121249  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:02.541891  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:02.621655  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:03.042052  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:03.114604  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:03.541532  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:03.618843  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:04.041564  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:04.121382  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:04.542024  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:04.622342  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:05.041193  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:05.114368  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:05.542109  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:05.619187  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:06.041850  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:06.113406  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:06.542093  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:06.621409  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:07.041742  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:07.112273  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:07.541291  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:07.612139  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:08.041849  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:08.115428  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:08.542280  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:08.615089  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:09.041572  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:09.122422  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:09.542140  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:09.611115  565581 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 19:21:10.041813  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:21:10.371844  565581 command_runner.go:130] > NAME      SECRETS   AGE
	I1024 19:21:10.371878  565581 command_runner.go:130] > default   0         0s
	I1024 19:21:10.371913  565581 kubeadm.go:1081] duration metric: took 12.163595426s to wait for elevateKubeSystemPrivileges.
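The burst of NotFound errors above is the elevateKubeSystemPrivileges wait that this duration metric summarizes: kube-controller-manager creates the "default" ServiceAccount asynchronously, so the runner polls roughly every 500ms until it exists. A client-go sketch of the same wait (the 2-minute timeout is an assumption; clientset construction omitted):

    package sketch

    import (
    	"context"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls until the controller-manager has created the
    // "default" ServiceAccount in the default namespace.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return false, nil // not created yet: keep polling
    			}
    			return err == nil, err
    		})
    }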
	I1024 19:21:10.371934  565581 kubeadm.go:406] StartCluster complete in 22.821734691s
	I1024 19:21:10.371962  565581 settings.go:142] acquiring lock: {Name:mk9f191a52d3ce53608a65d0f0798312edc39465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:21:10.372059  565581 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:10.373267  565581 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/kubeconfig: {Name:mkcf54ea0dedcb61df1368dce9070a6aebbbad94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:21:10.373633  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:21:10.373789  565581 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:21:10.373886  565581 addons.go:69] Setting storage-provisioner=true in profile "multinode-961484"
	I1024 19:21:10.373894  565581 addons.go:69] Setting default-storageclass=true in profile "multinode-961484"
	I1024 19:21:10.373922  565581 addons.go:231] Setting addon storage-provisioner=true in "multinode-961484"
	I1024 19:21:10.373925  565581 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:21:10.373976  565581 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:10.373923  565581 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-961484"
	I1024 19:21:10.374057  565581 host.go:66] Checking if "multinode-961484" exists ...
	I1024 19:21:10.374347  565581 kapi.go:59] client config for multinode-961484: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
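The rest.Config dump above shows per-profile client certificate auth against https://192.168.58.2:8443. A minimal sketch of building an equivalent config with client-go's clientcmd loader, assuming only the kubeconfig path from the log:

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig this job writes and derive a rest.Config from it.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17485-471553/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// A typed clientset over that config issues the GET/PUT round trips
    	// recorded by round_trippers below.
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }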
	I1024 19:21:10.374792  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:21:10.375109  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:21:10.375499  565581 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:21:10.375906  565581 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:21:10.375932  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:10.375945  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:10.375954  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:10.388392  565581 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1024 19:21:10.388435  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:10.388443  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:10 GMT
	I1024 19:21:10.388448  565581 round_trippers.go:580]     Audit-Id: a1737865-7995-427c-9b15-c35de9ca94b4
	I1024 19:21:10.388454  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:10.388459  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:10.388464  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:10.388469  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:10.388475  565581 round_trippers.go:580]     Content-Length: 291
	I1024 19:21:10.388524  565581 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"72307e84-17f5-44e0-9f8d-7067b45ba693","resourceVersion":"345","creationTimestamp":"2023-10-24T19:20:57Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1024 19:21:10.388965  565581 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"72307e84-17f5-44e0-9f8d-7067b45ba693","resourceVersion":"345","creationTimestamp":"2023-10-24T19:20:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1024 19:21:10.389024  565581 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:21:10.389031  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:10.389038  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:10.389047  565581 round_trippers.go:473]     Content-Type: application/json
	I1024 19:21:10.389055  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:10.402259  565581 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:21:10.399352  565581 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:10.404046  565581 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:21:10.404066  565581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:21:10.404118  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:21:10.404184  565581 kapi.go:59] client config for multinode-961484: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:21:10.404438  565581 addons.go:231] Setting addon default-storageclass=true in "multinode-961484"
	I1024 19:21:10.404482  565581 host.go:66] Checking if "multinode-961484" exists ...
	I1024 19:21:10.404902  565581 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:21:10.422237  565581 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:21:10.422265  565581 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:21:10.422330  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:21:10.423369  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:21:10.439452  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:21:10.444993  565581 round_trippers.go:574] Response Status: 200 OK in 55 milliseconds
	I1024 19:21:10.445028  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:10.445040  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:10 GMT
	I1024 19:21:10.445050  565581 round_trippers.go:580]     Audit-Id: 62892604-b42f-40cc-baa8-eb0dee4c093a
	I1024 19:21:10.445059  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:10.445067  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:10.445074  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:10.445083  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:10.445096  565581 round_trippers.go:580]     Content-Length: 291
	I1024 19:21:10.445140  565581 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"72307e84-17f5-44e0-9f8d-7067b45ba693","resourceVersion":"346","creationTimestamp":"2023-10-24T19:20:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1024 19:21:10.445330  565581 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:21:10.445345  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:10.445355  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:10.445365  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:10.448151  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:10.448186  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:10.448198  565581 round_trippers.go:580]     Content-Length: 291
	I1024 19:21:10.448207  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:10 GMT
	I1024 19:21:10.448222  565581 round_trippers.go:580]     Audit-Id: bafed2bf-faac-45d1-a4a8-75a24bb63df8
	I1024 19:21:10.448231  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:10.448241  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:10.448249  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:10.448261  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:10.448292  565581 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"72307e84-17f5-44e0-9f8d-7067b45ba693","resourceVersion":"346","creationTimestamp":"2023-10-24T19:20:57Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1024 19:21:10.448398  565581 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-961484" context rescaled to 1 replicas
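The GET/PUT pair above rewrites the coredns Deployment's autoscaling/v1 Scale subresource, dropping it from 2 replicas to 1 for this single-node cluster. A sketch of the same round trip through client-go's typed scale helpers rather than raw REST (clientset construction omitted):

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS performs the GET-then-PUT of the Scale subresource shown
    // in the round_trippers log above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = 1 // one CoreDNS replica is enough on a single node
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }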
	I1024 19:21:10.448433  565581 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:21:10.450481  565581 out.go:177] * Verifying Kubernetes components...
	I1024 19:21:10.452080  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:10.597327  565581 command_runner.go:130] > apiVersion: v1
	I1024 19:21:10.597364  565581 command_runner.go:130] > data:
	I1024 19:21:10.597375  565581 command_runner.go:130] >   Corefile: |
	I1024 19:21:10.597384  565581 command_runner.go:130] >     .:53 {
	I1024 19:21:10.597395  565581 command_runner.go:130] >         errors
	I1024 19:21:10.597404  565581 command_runner.go:130] >         health {
	I1024 19:21:10.597410  565581 command_runner.go:130] >            lameduck 5s
	I1024 19:21:10.597415  565581 command_runner.go:130] >         }
	I1024 19:21:10.597420  565581 command_runner.go:130] >         ready
	I1024 19:21:10.597434  565581 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1024 19:21:10.597443  565581 command_runner.go:130] >            pods insecure
	I1024 19:21:10.597454  565581 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1024 19:21:10.597463  565581 command_runner.go:130] >            ttl 30
	I1024 19:21:10.597468  565581 command_runner.go:130] >         }
	I1024 19:21:10.597476  565581 command_runner.go:130] >         prometheus :9153
	I1024 19:21:10.597483  565581 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1024 19:21:10.597492  565581 command_runner.go:130] >            max_concurrent 1000
	I1024 19:21:10.597499  565581 command_runner.go:130] >         }
	I1024 19:21:10.597504  565581 command_runner.go:130] >         cache 30
	I1024 19:21:10.597511  565581 command_runner.go:130] >         loop
	I1024 19:21:10.597516  565581 command_runner.go:130] >         reload
	I1024 19:21:10.597524  565581 command_runner.go:130] >         loadbalance
	I1024 19:21:10.597529  565581 command_runner.go:130] >     }
	I1024 19:21:10.597537  565581 command_runner.go:130] > kind: ConfigMap
	I1024 19:21:10.597542  565581 command_runner.go:130] > metadata:
	I1024 19:21:10.597552  565581 command_runner.go:130] >   creationTimestamp: "2023-10-24T19:20:57Z"
	I1024 19:21:10.597560  565581 command_runner.go:130] >   name: coredns
	I1024 19:21:10.597566  565581 command_runner.go:130] >   namespace: kube-system
	I1024 19:21:10.597573  565581 command_runner.go:130] >   resourceVersion: "232"
	I1024 19:21:10.597584  565581 command_runner.go:130] >   uid: dc7eb12f-4195-4971-95bf-820e156bfd43
	I1024 19:21:10.597817  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
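The sed pipeline above edits the Corefile fetched just before it: it inserts a log directive ahead of errors and a hosts stanza ahead of forward, so pods can resolve host.minikube.internal to the host gateway. Reconstructed from the sed expressions, the server block gains:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }

The "configmap/coredns replaced" line further down confirms the edited ConfigMap was written back.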
	I1024 19:21:10.598133  565581 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:10.598417  565581 kapi.go:59] client config for multinode-961484: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:21:10.598677  565581 node_ready.go:35] waiting up to 6m0s for node "multinode-961484" to be "Ready" ...
	I1024 19:21:10.598790  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:10.598796  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:10.598804  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:10.598810  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:10.644201  565581 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1024 19:21:10.644289  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:10.644313  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:10.644327  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:10.644335  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:10.644345  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:10 GMT
	I1024 19:21:10.644353  565581 round_trippers.go:580]     Audit-Id: 32495688-8677-4697-9da2-0d7f6e1d069a
	I1024 19:21:10.644362  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:10.644499  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"310","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1024 19:21:10.645443  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:10.645520  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:10.645544  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:10.645565  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:10.648991  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:10.649080  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:10.649102  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:10.649124  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:10.649133  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:10.649150  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:10.649176  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:10 GMT
	I1024 19:21:10.649184  565581 round_trippers.go:580]     Audit-Id: 314eec34-2f62-4484-b911-96e49e1576cf
	I1024 19:21:10.649700  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"310","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1024 19:21:10.664971  565581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:21:10.667612  565581 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:21:11.150742  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:11.150768  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:11.150782  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:11.150792  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:11.153863  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:11.153911  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:11.153922  565581 round_trippers.go:580]     Audit-Id: da71c153-2dc1-43f7-a12e-78081d59a6fe
	I1024 19:21:11.153931  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:11.153939  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:11.153946  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:11.153954  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:11.153964  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:11 GMT
	I1024 19:21:11.154214  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"310","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1024 19:21:11.344716  565581 command_runner.go:130] > configmap/coredns replaced
	I1024 19:21:11.350027  565581 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1024 19:21:11.350096  565581 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1024 19:21:11.350271  565581 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1024 19:21:11.350289  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:11.350300  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:11.350309  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:11.353278  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:11.353376  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:11.353405  565581 round_trippers.go:580]     Content-Length: 1273
	I1024 19:21:11.353415  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:11 GMT
	I1024 19:21:11.353424  565581 round_trippers.go:580]     Audit-Id: 0aad640d-7912-4445-963a-47c9cdca8838
	I1024 19:21:11.353443  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:11.353456  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:11.353466  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:11.353474  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:11.353597  565581 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"361"},"items":[{"metadata":{"name":"standard","uid":"9d078d4b-9b15-4fc1-8f6d-7352b42ed596","resourceVersion":"359","creationTimestamp":"2023-10-24T19:21:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:21:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1024 19:21:11.354112  565581 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9d078d4b-9b15-4fc1-8f6d-7352b42ed596","resourceVersion":"359","creationTimestamp":"2023-10-24T19:21:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:21:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 19:21:11.354185  565581 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1024 19:21:11.354197  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:11.354208  565581 round_trippers.go:473]     Content-Type: application/json
	I1024 19:21:11.354221  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:11.354234  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:11.361871  565581 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1024 19:21:11.361902  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:11.361913  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:11.361921  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:11.361931  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:11.361939  565581 round_trippers.go:580]     Content-Length: 1220
	I1024 19:21:11.361960  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:11 GMT
	I1024 19:21:11.361969  565581 round_trippers.go:580]     Audit-Id: b0eafe21-d0c8-4ccc-842d-c49900e12be7
	I1024 19:21:11.361982  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:11.362404  565581 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"9d078d4b-9b15-4fc1-8f6d-7352b42ed596","resourceVersion":"359","creationTimestamp":"2023-10-24T19:21:11Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T19:21:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 19:21:11.603606  565581 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1024 19:21:11.609726  565581 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1024 19:21:11.617287  565581 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 19:21:11.650603  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:11.650629  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:11.650640  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:11.650648  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:11.650817  565581 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 19:21:11.653136  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:11.653159  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:11.653167  565581 round_trippers.go:580]     Audit-Id: 3a95bd78-615c-43a6-ac17-54bfae9a8524
	I1024 19:21:11.653173  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:11.653178  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:11.653183  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:11.653188  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:11.653195  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:11 GMT
	I1024 19:21:11.653386  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"310","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1024 19:21:11.661422  565581 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1024 19:21:11.676055  565581 command_runner.go:130] > pod/storage-provisioner created
	I1024 19:21:11.683690  565581 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.015965366s)
	I1024 19:21:11.686469  565581 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1024 19:21:11.688635  565581 addons.go:502] enable addons completed in 1.314859876s: enabled=[default-storageclass storage-provisioner]
	I1024 19:21:12.151318  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:12.151345  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.151357  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.151364  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.154350  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.154375  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.154382  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.154389  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.154394  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.154399  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.154405  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.154410  565581 round_trippers.go:580]     Audit-Id: 9dbdda24-a288-4cb0-941d-55dc1beeb4c9
	I1024 19:21:12.154589  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"310","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I1024 19:21:12.651395  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:12.651427  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.651435  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.651441  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.654096  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.654124  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.654134  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.654140  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.654145  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.654151  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.654156  565581 round_trippers.go:580]     Audit-Id: 9319e2e3-9ceb-462e-8bc0-24fb2347534f
	I1024 19:21:12.654161  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.654283  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:12.654593  565581 node_ready.go:49] node "multinode-961484" has status "Ready":"True"
	I1024 19:21:12.654610  565581 node_ready.go:38] duration metric: took 2.055894793s waiting for node "multinode-961484" to be "Ready" ...
	I1024 19:21:12.654627  565581 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
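The node_ready/pod_ready lines above summarize the wait loop this trace keeps repeating: GET the object, inspect its Ready condition, sleep roughly 500ms, and retry until the budget (6m0s for system-critical pods) expires. Below is a minimal sketch of that pattern with client-go, assuming the kubeconfig path seen earlier in the log; it illustrates the technique and is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is ConditionTrue,
// the same readiness check pod_ready waits on above.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as used elsewhere in this log; an assumption outside minikube.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The 6m0s budget and ~500ms interval match the cadence visible in the trace.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.Background(), "coredns-5dd5756b68-wgdhw", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}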
	I1024 19:21:12.654702  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:12.654710  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.654717  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.654723  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.657711  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.657730  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.657737  565581 round_trippers.go:580]     Audit-Id: 9c6eb070-b5fb-4ac0-88d5-0a8c9f648cdc
	I1024 19:21:12.657743  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.657748  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.657754  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.657759  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.657765  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.658285  565581 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"383"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"383","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54149 chars]
	I1024 19:21:12.661628  565581 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:12.661725  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:12.661737  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.661748  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.661762  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.663681  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:12.663702  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.663711  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.663719  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.663725  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.663733  565581 round_trippers.go:580]     Audit-Id: 1f9cb7d0-af69-4afc-9474-d1d5ea01771e
	I1024 19:21:12.663742  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.663754  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.663893  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"383","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 19:21:12.664346  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:12.664360  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.664368  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.664374  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.666475  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.666500  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.666511  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.666520  565581 round_trippers.go:580]     Audit-Id: 505c233e-2981-49fd-9c58-85cf08749a2b
	I1024 19:21:12.666529  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.666538  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.666546  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.666557  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.666734  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:12.667323  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:12.667350  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.667363  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.667376  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.669844  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.669868  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.669878  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.669885  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.669890  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.669895  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.669900  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.669906  565581 round_trippers.go:580]     Audit-Id: 17803526-0cf3-439a-947b-db2d7746fc2b
	I1024 19:21:12.670077  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"383","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 19:21:12.670543  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:12.670557  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:12.670564  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:12.670570  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:12.672876  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:12.672899  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:12.672908  565581 round_trippers.go:580]     Audit-Id: b659c535-6e12-43ce-8c5d-724e2ecbc908
	I1024 19:21:12.672914  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:12.672922  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:12.672931  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:12.672944  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:12.672957  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:12 GMT
	I1024 19:21:12.673091  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:13.174078  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:13.174110  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:13.174122  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:13.174131  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:13.180324  565581 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1024 19:21:13.180360  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:13.180416  565581 round_trippers.go:580]     Audit-Id: 40bc6428-12fb-48a0-a100-eb3bb9fd01fb
	I1024 19:21:13.180550  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:13.180616  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:13.180628  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:13.180640  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:13.180653  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:13 GMT
	I1024 19:21:13.181203  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"383","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 19:21:13.181913  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:13.181934  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:13.181946  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:13.181963  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:13.184906  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:13.184944  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:13.184955  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:13 GMT
	I1024 19:21:13.184961  565581 round_trippers.go:580]     Audit-Id: c138dcdb-b726-490f-86bd-049079eaabaf
	I1024 19:21:13.184966  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:13.184973  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:13.184979  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:13.184984  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:13.185179  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:13.674252  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:13.674274  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:13.674282  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:13.674289  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:13.676686  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:13.676711  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:13.676722  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:13 GMT
	I1024 19:21:13.676728  565581 round_trippers.go:580]     Audit-Id: 63409589-b766-491a-9922-3f2fff78c637
	I1024 19:21:13.676734  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:13.676739  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:13.676744  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:13.676751  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:13.676945  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:13.677396  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:13.677408  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:13.677415  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:13.677422  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:13.679378  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:13.679399  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:13.679410  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:13.679419  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:13.679425  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:13 GMT
	I1024 19:21:13.679433  565581 round_trippers.go:580]     Audit-Id: 56b3fb6d-f720-43e0-aa12-52117a43ecec
	I1024 19:21:13.679439  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:13.679446  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:13.679586  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:14.174428  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:14.174455  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:14.174463  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:14.174469  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:14.176940  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:14.176970  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:14.176980  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:14.176988  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:14.177003  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:14 GMT
	I1024 19:21:14.177012  565581 round_trippers.go:580]     Audit-Id: e78b8e91-b499-4a29-af7f-147f9f14299c
	I1024 19:21:14.177020  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:14.177032  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:14.177166  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:14.177750  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:14.177766  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:14.177774  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:14.177782  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:14.179425  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:14.179440  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:14.179446  565581 round_trippers.go:580]     Audit-Id: c0b78a14-d8f6-4904-85d6-9d5a7665fb72
	I1024 19:21:14.179452  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:14.179459  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:14.179479  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:14.179490  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:14.179502  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:14 GMT
	I1024 19:21:14.179603  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:14.674218  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:14.674248  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:14.674258  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:14.674267  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:14.676995  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:14.677038  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:14.677049  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:14.677058  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:14.677067  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:14 GMT
	I1024 19:21:14.677075  565581 round_trippers.go:580]     Audit-Id: 2d254aae-715d-48c8-b1cb-829fb989272e
	I1024 19:21:14.677085  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:14.677093  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:14.677239  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:14.677716  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:14.677728  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:14.677735  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:14.677741  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:14.679904  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:14.679921  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:14.679928  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:14.679933  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:14.679938  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:14.679943  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:14 GMT
	I1024 19:21:14.679964  565581 round_trippers.go:580]     Audit-Id: d9a7dd24-d1b1-4064-b5ce-63e97a548996
	I1024 19:21:14.679972  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:14.680239  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:14.680593  565581 pod_ready.go:102] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"False"
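Each polling iteration above is just a pair of authenticated GETs, exactly as round_trippers traces them: one for the pod, one for the node, with the Accept and User-Agent headers shown. Below is a bare net/http sketch of that request shape against the endpoint from the log; it omits the bearer token and CA bundle that client-go injects, so a real apiserver would answer 401 rather than the 200s logged here.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint and headers copied from the round_trippers lines above.
	req, err := http.NewRequest(http.MethodGet,
		"https://192.168.58.2:8443/api/v1/nodes/multinode-961484", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/json, */*")
	req.Header.Set("User-Agent", "minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format")
	// Sketch only: skip verification of the apiserver's self-signed certificate.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status) // the log shows "200 OK"; expect 401 without credentials
	fmt.Printf("%.200s\n", body) // first bytes of the JSON Node object
}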
	I1024 19:21:15.173888  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:15.173926  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:15.173935  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:15.173941  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:15.176822  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:15.176843  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:15.176850  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:15 GMT
	I1024 19:21:15.176856  565581 round_trippers.go:580]     Audit-Id: 5c739d2c-d5d0-4271-ab7e-3032e53d5829
	I1024 19:21:15.176861  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:15.176867  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:15.176872  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:15.176878  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:15.177070  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:15.177553  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:15.177565  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:15.177572  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:15.177578  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:15.179555  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:15.179572  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:15.179578  565581 round_trippers.go:580]     Audit-Id: 776da27a-0795-423c-a30f-74ef5e77714b
	I1024 19:21:15.179584  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:15.179589  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:15.179594  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:15.179601  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:15.179609  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:15 GMT
	I1024 19:21:15.179774  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:15.674134  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:15.674164  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:15.674172  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:15.674185  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:15.678219  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:15.678253  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:15.678261  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:15.678267  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:15.678272  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:15.678277  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:15.678283  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:15 GMT
	I1024 19:21:15.678288  565581 round_trippers.go:580]     Audit-Id: 6149b1cf-72d2-47e4-a767-9e28b5bab6d4
	I1024 19:21:15.678545  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:15.679212  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:15.679236  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:15.679245  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:15.679252  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:15.682126  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:15.682161  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:15.682172  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:15.682178  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:15.682184  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:15 GMT
	I1024 19:21:15.682189  565581 round_trippers.go:580]     Audit-Id: c89bfc36-e1e0-4140-9396-6233ce9adac6
	I1024 19:21:15.682196  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:15.682201  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:15.682311  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:16.174210  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:16.174380  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:16.174396  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:16.174404  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:16.178517  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:16.178545  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:16.178552  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:16.178558  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:16.178563  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:16 GMT
	I1024 19:21:16.178568  565581 round_trippers.go:580]     Audit-Id: 3e5a00f9-79b6-4db6-9986-f3e1dd26919c
	I1024 19:21:16.178573  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:16.178579  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:16.178852  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:16.179595  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:16.179621  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:16.179633  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:16.179642  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:16.182414  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:16.182443  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:16.182455  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:16 GMT
	I1024 19:21:16.182464  565581 round_trippers.go:580]     Audit-Id: 1718f49a-f9f7-469e-9312-5b3b899d0f08
	I1024 19:21:16.182473  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:16.182483  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:16.182492  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:16.182501  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:16.182627  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:16.674058  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:16.674087  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:16.674096  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:16.674102  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:16.676842  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:16.676869  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:16.676876  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:16.676881  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:16.676887  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:16 GMT
	I1024 19:21:16.676893  565581 round_trippers.go:580]     Audit-Id: 7f6d651f-01a5-458d-8af3-ca62c4815aeb
	I1024 19:21:16.676900  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:16.676909  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:16.677178  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:16.677723  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:16.677736  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:16.677744  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:16.677750  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:16.680370  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:16.680395  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:16.680403  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:16.680409  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:16.680585  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:16.680694  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:16.680715  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:16 GMT
	I1024 19:21:16.680729  565581 round_trippers.go:580]     Audit-Id: 9d669f0f-f710-463e-9cc7-b6c9642e6d14
	I1024 19:21:16.680947  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:16.681376  565581 pod_ready.go:102] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"False"
	I1024 19:21:17.174604  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:17.174631  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:17.174639  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:17.174645  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:17.177484  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:17.177509  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:17.177516  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:17.177521  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:17.177526  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:17 GMT
	I1024 19:21:17.177531  565581 round_trippers.go:580]     Audit-Id: 64a244cd-92f7-4dbe-a0d5-e6651a1a4877
	I1024 19:21:17.177536  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:17.177542  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:17.177698  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:17.178249  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:17.178262  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:17.178270  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:17.178275  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:17.180280  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:17.180298  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:17.180307  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:17.180315  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:17.180323  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:17 GMT
	I1024 19:21:17.180330  565581 round_trippers.go:580]     Audit-Id: e3218c0b-d00f-4e2d-ae29-7787116a44e8
	I1024 19:21:17.180338  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:17.180346  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:17.180437  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:17.674071  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:17.674100  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:17.674110  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:17.674116  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:17.677399  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:17.677440  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:17.677453  565581 round_trippers.go:580]     Audit-Id: bfe37860-ab9b-4f5b-8d17-9b0171e84c56
	I1024 19:21:17.677462  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:17.677469  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:17.677475  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:17.677481  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:17.677487  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:17 GMT
	I1024 19:21:17.677640  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:17.678284  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:17.678309  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:17.678322  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:17.678328  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:17.680687  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:17.680712  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:17.680722  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:17.680732  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:17.680743  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:17.680751  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:17 GMT
	I1024 19:21:17.680757  565581 round_trippers.go:580]     Audit-Id: 6165c3d2-dc12-410d-80ed-c9eb9f5d12dd
	I1024 19:21:17.680763  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:17.680902  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:18.174106  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:18.174142  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:18.174156  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:18.174167  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:18.176683  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:18.176711  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:18.176723  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:18.176732  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:18.176746  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:18.176755  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:18.176767  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:18 GMT
	I1024 19:21:18.176796  565581 round_trippers.go:580]     Audit-Id: c07c8cc3-09a6-4769-a598-00d51a737e68
	I1024 19:21:18.176951  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:18.177689  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:18.177706  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:18.177718  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:18.177730  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:18.180187  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:18.180205  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:18.180211  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:18 GMT
	I1024 19:21:18.180217  565581 round_trippers.go:580]     Audit-Id: c85891b2-854c-4b05-8e35-107407da725e
	I1024 19:21:18.180225  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:18.180234  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:18.180241  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:18.180249  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:18.180353  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:18.674369  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:18.674395  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:18.674404  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:18.674410  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:18.677694  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:18.677723  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:18.677733  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:18.677742  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:18.677750  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:18.677758  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:18.677765  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:18 GMT
	I1024 19:21:18.677772  565581 round_trippers.go:580]     Audit-Id: c6b0fcbb-89bb-4d90-b4a9-a13ed83f5cec
	I1024 19:21:18.678199  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:18.678774  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:18.678790  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:18.678798  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:18.678804  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:18.683021  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:18.683061  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:18.683072  565581 round_trippers.go:580]     Audit-Id: b9f9141f-cce4-46bf-8ddf-5ad8a31c7291
	I1024 19:21:18.683078  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:18.683084  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:18.683090  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:18.683097  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:18.683103  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:18 GMT
	I1024 19:21:18.683283  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:18.683679  565581 pod_ready.go:102] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"False"
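The cycle above is minikube's pod_ready helper at work: roughly every 500ms it GETs the coredns pod and its node, and keeps looping while the pod still reports the Ready condition as False. A minimal client-go sketch of the same poll (the function name waitPodReady and the clientset wiring are illustrative assumptions, not minikube's actual code):

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the API server until the pod's Ready condition is
	// True, mirroring the GET-pod cycle logged above at ~500ms intervals.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}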
	I1024 19:21:19.173780  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:19.173805  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:19.173814  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:19.173820  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:19.176753  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:19.176800  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:19.176811  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:19.176818  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:19.176826  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:19.176834  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:19 GMT
	I1024 19:21:19.176843  565581 round_trippers.go:580]     Audit-Id: 340229ee-7823-48a4-8983-a4ff63435450
	I1024 19:21:19.176852  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:19.177055  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:19.177532  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:19.177545  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:19.177552  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:19.177558  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:19.180502  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:19.180531  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:19.180540  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:19 GMT
	I1024 19:21:19.180549  565581 round_trippers.go:580]     Audit-Id: 3c41fcf1-8382-411e-a953-f31460d59f50
	I1024 19:21:19.180556  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:19.180565  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:19.180574  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:19.180584  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:19.180696  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:19.674090  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:19.674117  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:19.674125  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:19.674131  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:19.676633  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:19.676658  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:19.676669  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:19.676678  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:19.676685  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:19 GMT
	I1024 19:21:19.676690  565581 round_trippers.go:580]     Audit-Id: b72b9c37-4fbd-4dc4-8c97-d230cc011c02
	I1024 19:21:19.676699  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:19.676711  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:19.676858  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:19.677392  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:19.677408  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:19.677430  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:19.677436  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:19.679516  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:19.679534  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:19.679543  565581 round_trippers.go:580]     Audit-Id: f720d1a6-ccb9-4384-9688-f717cbfe0e81
	I1024 19:21:19.679549  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:19.679554  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:19.679560  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:19.679566  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:19.679571  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:19 GMT
	I1024 19:21:19.679660  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:20.174092  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:20.174139  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:20.174151  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:20.174161  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:20.177050  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:20.177090  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:20.177102  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:20.177111  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:20.177124  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:20.177132  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:20 GMT
	I1024 19:21:20.177137  565581 round_trippers.go:580]     Audit-Id: efc58b1a-6d7b-4017-9c01-595b7ae198b0
	I1024 19:21:20.177142  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:20.177285  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:20.178365  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:20.178388  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:20.178396  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:20.178402  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:20.181503  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:20.181607  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:20.181622  565581 round_trippers.go:580]     Audit-Id: 9276207a-a21b-4117-a772-3df9ea783fcb
	I1024 19:21:20.181636  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:20.181644  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:20.181722  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:20.181730  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:20.181736  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:20 GMT
	I1024 19:21:20.181893  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:20.674125  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:20.674320  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:20.674346  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:20.674357  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:20.677817  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:20.677849  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:20.677859  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:20.677866  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:20.677873  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:20 GMT
	I1024 19:21:20.677879  565581 round_trippers.go:580]     Audit-Id: 1b22063f-d998-4e1c-aa07-de85336008ee
	I1024 19:21:20.677905  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:20.677911  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:20.678094  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:20.678739  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:20.678756  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:20.678764  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:20.678769  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:20.681906  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:20.681927  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:20.681934  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:20.681940  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:20 GMT
	I1024 19:21:20.681945  565581 round_trippers.go:580]     Audit-Id: a0a7ed63-e6b0-44cb-ac84-af02830d1f0e
	I1024 19:21:20.681951  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:20.681956  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:20.681961  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:20.682153  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:21.173848  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:21.173879  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:21.173889  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:21.173895  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:21.176926  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:21.176955  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:21.176966  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:21.176974  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:21 GMT
	I1024 19:21:21.176983  565581 round_trippers.go:580]     Audit-Id: cdc37733-3107-4bf9-97fd-ab22d9c70ac2
	I1024 19:21:21.176991  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:21.176999  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:21.177066  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:21.177252  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:21.177935  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:21.177968  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:21.177976  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:21.177982  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:21.182410  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:21.182438  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:21.182446  565581 round_trippers.go:580]     Audit-Id: 7531acbc-85ee-4502-828e-052975f626b9
	I1024 19:21:21.182452  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:21.182457  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:21.182463  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:21.182471  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:21.182481  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:21 GMT
	I1024 19:21:21.182654  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:21.183037  565581 pod_ready.go:102] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"False"
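Each iteration also re-fetches the Node object; its labels mark this as the control-plane node (node-role.kubernetes.io/control-plane) and carry minikube's bookkeeping labels (minikube.k8s.io/version, minikube.k8s.io/commit, minikube.k8s.io/primary). A hedged helper for selecting such nodes by that label, reusing the imports from the earlier sketch (listControlPlaneNodes is an assumed name, not minikube code):

	// listControlPlaneNodes selects nodes carrying the control-plane role
	// label seen in the Node responses above.
	func listControlPlaneNodes(cs kubernetes.Interface) (*corev1.NodeList, error) {
		return cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
			LabelSelector: "node-role.kubernetes.io/control-plane=",
		})
	}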
	I1024 19:21:21.674175  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:21.674197  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:21.674206  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:21.674212  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:21.676566  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:21.676588  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:21.676595  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:21.676601  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:21.676606  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:21.676611  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:21 GMT
	I1024 19:21:21.676616  565581 round_trippers.go:580]     Audit-Id: f8c1fe58-e5e1-4075-88c1-e12c8f20db73
	I1024 19:21:21.676620  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:21.676869  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:21.677427  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:21.677441  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:21.677452  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:21.677460  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:21.679480  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:21.679506  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:21.679516  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:21.679524  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:21.679535  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:21.679547  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:21 GMT
	I1024 19:21:21.679556  565581 round_trippers.go:580]     Audit-Id: af3f2b7c-3787-4591-a2bb-9227b8b166b4
	I1024 19:21:21.679562  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:21.679662  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:22.174551  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:22.174597  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:22.174612  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:22.174624  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:22.178279  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:22.178317  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:22.178331  565581 round_trippers.go:580]     Audit-Id: 93c558d4-03bd-46e7-b934-4884ff894d6f
	I1024 19:21:22.178342  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:22.178352  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:22.178383  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:22.178398  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:22.178407  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:22 GMT
	I1024 19:21:22.178609  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:22.179236  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:22.179265  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:22.179273  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:22.179279  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:22.182328  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:22.182353  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:22.182426  565581 round_trippers.go:580]     Audit-Id: 3a41ab3f-2666-48d2-bcd2-d307aa259762
	I1024 19:21:22.182455  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:22.182464  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:22.182476  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:22.182489  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:22.182502  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:22 GMT
	I1024 19:21:22.182645  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:22.674068  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:22.674095  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:22.674104  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:22.674110  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:22.676893  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:22.676917  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:22.676929  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:22.676934  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:22 GMT
	I1024 19:21:22.676939  565581 round_trippers.go:580]     Audit-Id: 908e58e8-0007-4cd3-b12e-29a270780e0f
	I1024 19:21:22.677000  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:22.677019  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:22.677101  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:22.677392  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"390","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I1024 19:21:22.677963  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:22.677982  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:22.677990  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:22.677997  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:22.680974  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:22.681000  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:22.681065  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:22.681087  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:22.681093  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:22 GMT
	I1024 19:21:22.681104  565581 round_trippers.go:580]     Audit-Id: a626f1b9-2032-4686-8289-1aa1e8989eb8
	I1024 19:21:22.681109  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:22.681114  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:22.681302  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.173859  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:23.173955  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.173976  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.173987  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.177914  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:23.177942  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.177951  565581 round_trippers.go:580]     Audit-Id: 114c5521-7724-4c5d-abaa-c59577e6e2e6
	I1024 19:21:23.177957  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.177963  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.177969  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.177976  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.177985  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.178272  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"406","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1024 19:21:23.178837  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.178855  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.178863  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.178956  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.181965  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.181993  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.182000  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.182007  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.182108  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.182121  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.182129  565581 round_trippers.go:580]     Audit-Id: db7902aa-efee-443c-86c4-42c03a46cfe3
	I1024 19:21:23.182142  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.182281  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.182653  565581 pod_ready.go:92] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.182675  565581 pod_ready.go:81] duration metric: took 10.521014773s waiting for pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace to be "Ready" ...
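Note the pod's resourceVersion jumped from 390 to 406 in the final response: that is the status update that flipped Ready to True and ended the 10.5s poll. An event-driven alternative to the 500ms polling seen here would be a watch on the single pod; this is a sketch of that alternative, not what minikube does in this log (imports as in the earlier sketch, and timeout handling is omitted for brevity):

	// watchPodReady opens a watch on one pod and returns once an update
	// reports the Ready condition as True, instead of polling on a timer.
	func watchPodReady(cs kubernetes.Interface, ns, name string) error {
		w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
			FieldSelector: "metadata.name=" + name,
		})
		if err != nil {
			return err
		}
		defer w.Stop()
		for ev := range w.ResultChan() {
			pod, ok := ev.Object.(*corev1.Pod)
			if !ok {
				continue
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		return nil // watch channel closed without the pod becoming Ready
	}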
	I1024 19:21:23.182685  565581 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.182759  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-961484
	I1024 19:21:23.182769  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.182776  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.182784  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.185558  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.185588  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.185599  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.185605  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.185611  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.185616  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.185622  565581 round_trippers.go:580]     Audit-Id: 5aac4fbb-20fe-460d-b40b-eeae777e705b
	I1024 19:21:23.185631  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.185774  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-961484","namespace":"kube-system","uid":"40e3cd85-c990-47c3-9b4f-3357407912b3","resourceVersion":"293","creationTimestamp":"2023-10-24T19:20:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a8dcbb037fe63d1a0a12d3fc24328a1e","kubernetes.io/config.mirror":"a8dcbb037fe63d1a0a12d3fc24328a1e","kubernetes.io/config.seen":"2023-10-24T19:20:50.668774383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1024 19:21:23.186238  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.186258  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.186265  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.186274  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.188643  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.188668  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.188677  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.188685  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.188692  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.188754  565581 round_trippers.go:580]     Audit-Id: d914eae4-3f07-42a2-bf8d-9d318f476373
	I1024 19:21:23.188783  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.188793  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.188926  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.189283  565581 pod_ready.go:92] pod "etcd-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.189301  565581 pod_ready.go:81] duration metric: took 6.608094ms waiting for pod "etcd-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.189318  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.189382  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-961484
	I1024 19:21:23.189392  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.189403  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.189413  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.191879  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.191901  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.191907  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.191913  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.191918  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.191923  565581 round_trippers.go:580]     Audit-Id: 1e03a9f2-2db2-42dc-8f70-b4ebb60a93e6
	I1024 19:21:23.191928  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.191933  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.192103  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-961484","namespace":"kube-system","uid":"ddaee20f-e0d6-4c4d-9f9e-455ef68f3c19","resourceVersion":"287","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4a9cd23fd8090ce7848f2d7b649f3664","kubernetes.io/config.mirror":"4a9cd23fd8090ce7848f2d7b649f3664","kubernetes.io/config.seen":"2023-10-24T19:20:57.153454574Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1024 19:21:23.192586  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.192602  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.192610  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.192616  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.195697  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:23.195728  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.195739  565581 round_trippers.go:580]     Audit-Id: 488387d6-3e63-4536-8946-9721e8a85646
	I1024 19:21:23.195746  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.195753  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.195760  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.195767  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.195775  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.196127  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.196551  565581 pod_ready.go:92] pod "kube-apiserver-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.196575  565581 pod_ready.go:81] duration metric: took 7.248376ms waiting for pod "kube-apiserver-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.196589  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.196678  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-961484
	I1024 19:21:23.196689  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.196700  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.196711  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.199719  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.199746  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.199756  565581 round_trippers.go:580]     Audit-Id: f04a9ac0-6fa3-4d51-9fcf-431fa7a2b0d7
	I1024 19:21:23.199764  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.199771  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.199777  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.199785  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.199792  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.199953  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-961484","namespace":"kube-system","uid":"6e58ec4f-71e0-4935-82f7-ea76ef7a7014","resourceVersion":"294","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"35e97d55ff71e17e9280e24931c7bc7f","kubernetes.io/config.mirror":"35e97d55ff71e17e9280e24931c7bc7f","kubernetes.io/config.seen":"2023-10-24T19:20:57.153464383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1024 19:21:23.200577  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.200607  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.200618  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.200625  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.203473  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.203498  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.203506  565581 round_trippers.go:580]     Audit-Id: da4c1213-8db7-4e4b-99be-367ffcf7f63a
	I1024 19:21:23.203515  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.203523  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.203532  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.203543  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.203550  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.203706  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.204165  565581 pod_ready.go:92] pod "kube-controller-manager-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.204186  565581 pod_ready.go:81] duration metric: took 7.588618ms waiting for pod "kube-controller-manager-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.204199  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87vtd" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.204282  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87vtd
	I1024 19:21:23.204292  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.204299  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.204305  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.207275  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.207302  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.207312  565581 round_trippers.go:580]     Audit-Id: 888c7468-f679-4a13-81c1-95b5c7e21ab2
	I1024 19:21:23.207320  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.207327  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.207334  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.207341  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.207348  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.207485  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-87vtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc38cf1-7c84-476c-a1c6-dd1c81356cdb","resourceVersion":"376","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"60ac3a5f-4331-4153-af10-f224daecff07","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60ac3a5f-4331-4153-af10-f224daecff07\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1024 19:21:23.207959  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.207975  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.207986  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.207994  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.210622  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.210653  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.210665  565581 round_trippers.go:580]     Audit-Id: 8f3cfd6b-b619-4b10-99dd-690b7bbd3adc
	I1024 19:21:23.210673  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.210681  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.210689  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.210698  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.210705  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.210913  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.211397  565581 pod_ready.go:92] pod "kube-proxy-87vtd" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.211418  565581 pod_ready.go:81] duration metric: took 7.210985ms waiting for pod "kube-proxy-87vtd" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.211435  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.374898  565581 request.go:629] Waited for 163.354332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-961484
	I1024 19:21:23.374988  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-961484
	I1024 19:21:23.374994  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.375003  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.375012  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.377428  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.377448  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.377454  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.377460  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.377467  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.377475  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.377484  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.377493  565581 round_trippers.go:580]     Audit-Id: 5b92b3eb-4f95-4cae-aaa3-c1608dd98898
	I1024 19:21:23.377632  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-961484","namespace":"kube-system","uid":"2304ca9c-4994-4c85-8790-3e9e112351fd","resourceVersion":"284","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f8529e664309cdaf0d05b1249def38ec","kubernetes.io/config.mirror":"f8529e664309cdaf0d05b1249def38ec","kubernetes.io/config.seen":"2023-10-24T19:20:57.153466244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1024 19:21:23.574450  565581 request.go:629] Waited for 196.318323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.574975  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:23.574992  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.575016  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.575104  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.577873  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:23.577898  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.577906  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.577911  565581 round_trippers.go:580]     Audit-Id: b008587b-873b-4882-a186-befeae1196e6
	I1024 19:21:23.577917  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.577922  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.577927  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.577932  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.578143  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:23.578477  565581 pod_ready.go:92] pod "kube-scheduler-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:23.578493  565581 pod_ready.go:81] duration metric: took 367.04972ms waiting for pod "kube-scheduler-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:23.578509  565581 pod_ready.go:38] duration metric: took 10.923851054s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
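
The ten seconds above are spent in exactly this loop: one GET per control-plane pod plus one GET for the node, repeated until each pod's PodReady condition flips to True. A minimal client-go sketch of the same polling pattern (the kubeconfig path is a placeholder; the pod name is taken from this run):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    deadline := time.Now().Add(6 * time.Minute) // the same 6m0s budget logged per pod
    for time.Now().Before(deadline) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-961484", metav1.GetOptions{})
        if err == nil && isPodReady(pod) {
            fmt.Println("Ready")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for pod")
}
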
	I1024 19:21:23.578529  565581 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:21:23.578579  565581 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:21:23.590488  565581 command_runner.go:130] > 1420
	I1024 19:21:23.590536  565581 api_server.go:72] duration metric: took 13.142060489s to wait for apiserver process to appear ...
	I1024 19:21:23.590550  565581 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:21:23.590573  565581 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1024 19:21:23.595826  565581 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1024 19:21:23.595914  565581 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1024 19:21:23.595922  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.595933  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.595954  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.597455  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:23.597482  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.597489  565581 round_trippers.go:580]     Audit-Id: 2bbecf28-6485-4cc4-bf9f-9c82d9055324
	I1024 19:21:23.597494  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.597500  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.597505  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.597510  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.597518  565581 round_trippers.go:580]     Content-Length: 264
	I1024 19:21:23.597526  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.597548  565581 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1024 19:21:23.597659  565581 api_server.go:141] control plane version: v1.28.3
	I1024 19:21:23.597683  565581 api_server.go:131] duration metric: took 7.12649ms to wait for apiserver health ...
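
The healthz probe and the /version call map one-to-one onto client-go's raw REST client and discovery client. A sketch, reusing a *kubernetes.Clientset built as in the previous snippet:

package sketch

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
)

// checkAPIServer mirrors the healthz-then-version sequence above.
func checkAPIServer(cs *kubernetes.Clientset) error {
    // Raw GET /healthz; the body is the literal "ok" seen in the log.
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    if err != nil {
        return err
    }
    fmt.Printf("healthz: %s\n", body)

    // Structured GET /version; the same JSON payload shown above.
    info, err := cs.Discovery().ServerVersion()
    if err != nil {
        return err
    }
    fmt.Println("control plane version:", info.GitVersion) // v1.28.3 in this run
    return nil
}
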
	I1024 19:21:23.597692  565581 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:21:23.774208  565581 request.go:629] Waited for 176.419048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:23.774276  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:23.774281  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.774289  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.774301  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.778166  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:23.778208  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.778219  565581 round_trippers.go:580]     Audit-Id: 3c1f33f8-53eb-4633-ad93-af43a005a33f
	I1024 19:21:23.778226  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.778233  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.778240  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.778247  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.778254  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.778908  565581 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"410"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"406","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1024 19:21:23.781191  565581 system_pods.go:59] 8 kube-system pods found
	I1024 19:21:23.781222  565581 system_pods.go:61] "coredns-5dd5756b68-wgdhw" [fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7] Running
	I1024 19:21:23.781227  565581 system_pods.go:61] "etcd-multinode-961484" [40e3cd85-c990-47c3-9b4f-3357407912b3] Running
	I1024 19:21:23.781231  565581 system_pods.go:61] "kindnet-zgn88" [a26cc577-13fe-45ab-9899-365498d67e7e] Running
	I1024 19:21:23.781237  565581 system_pods.go:61] "kube-apiserver-multinode-961484" [ddaee20f-e0d6-4c4d-9f9e-455ef68f3c19] Running
	I1024 19:21:23.781242  565581 system_pods.go:61] "kube-controller-manager-multinode-961484" [6e58ec4f-71e0-4935-82f7-ea76ef7a7014] Running
	I1024 19:21:23.781246  565581 system_pods.go:61] "kube-proxy-87vtd" [dfc38cf1-7c84-476c-a1c6-dd1c81356cdb] Running
	I1024 19:21:23.781249  565581 system_pods.go:61] "kube-scheduler-multinode-961484" [2304ca9c-4994-4c85-8790-3e9e112351fd] Running
	I1024 19:21:23.781253  565581 system_pods.go:61] "storage-provisioner" [6ae1e99e-0a67-49f4-b89b-b708d36767cb] Running
	I1024 19:21:23.781260  565581 system_pods.go:74] duration metric: took 183.562208ms to wait for pod list to return data ...
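
The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter, which defaults to roughly 5 requests/second with a burst of 10; the many GETs in the readiness loop exhaust the burst, after which each request queues for about 200ms, matching the 160-200ms waits logged. If that latency mattered, the limits live on rest.Config; a sketch (the chosen values are arbitrary examples):

package sketch

import "k8s.io/client-go/rest"

// relaxThrottle raises client-go's client-side rate limits, the source
// of the "Waited ..." log lines above. Defaults are ~5 QPS / burst 10.
func relaxThrottle(cfg *rest.Config) *rest.Config {
    cfg.QPS = 50
    cfg.Burst = 100
    return cfg
}
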
	I1024 19:21:23.781268  565581 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:21:23.974740  565581 request.go:629] Waited for 193.392688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:21:23.974831  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:21:23.974838  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:23.974850  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:23.974862  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:23.982088  565581 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1024 19:21:23.982122  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:23.982134  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:23.982139  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:23.982145  565581 round_trippers.go:580]     Content-Length: 261
	I1024 19:21:23.982149  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:23 GMT
	I1024 19:21:23.982155  565581 round_trippers.go:580]     Audit-Id: abca60b5-f2c2-4f59-89c0-a16f9a7d4d8d
	I1024 19:21:23.982161  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:23.982166  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:23.982192  565581 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f77783ad-8c04-45b1-bb02-32ae3bece6aa","resourceVersion":"303","creationTimestamp":"2023-10-24T19:21:10Z"}}]}
	I1024 19:21:23.982490  565581 default_sa.go:45] found service account: "default"
	I1024 19:21:23.982512  565581 default_sa.go:55] duration metric: took 201.235323ms for default service account to be created ...
	I1024 19:21:23.982528  565581 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:21:24.173985  565581 request.go:629] Waited for 191.352833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:24.174184  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:24.174205  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:24.174219  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:24.174236  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:24.178887  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:24.178927  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:24.178939  565581 round_trippers.go:580]     Audit-Id: 78dcf11b-5811-4de0-8d40-6e424a5b3ce2
	I1024 19:21:24.178948  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:24.178957  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:24.178965  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:24.178976  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:24.178986  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:24 GMT
	I1024 19:21:24.179577  565581 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"406","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1024 19:21:24.181330  565581 system_pods.go:86] 8 kube-system pods found
	I1024 19:21:24.181354  565581 system_pods.go:89] "coredns-5dd5756b68-wgdhw" [fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7] Running
	I1024 19:21:24.181360  565581 system_pods.go:89] "etcd-multinode-961484" [40e3cd85-c990-47c3-9b4f-3357407912b3] Running
	I1024 19:21:24.181364  565581 system_pods.go:89] "kindnet-zgn88" [a26cc577-13fe-45ab-9899-365498d67e7e] Running
	I1024 19:21:24.181367  565581 system_pods.go:89] "kube-apiserver-multinode-961484" [ddaee20f-e0d6-4c4d-9f9e-455ef68f3c19] Running
	I1024 19:21:24.181372  565581 system_pods.go:89] "kube-controller-manager-multinode-961484" [6e58ec4f-71e0-4935-82f7-ea76ef7a7014] Running
	I1024 19:21:24.181376  565581 system_pods.go:89] "kube-proxy-87vtd" [dfc38cf1-7c84-476c-a1c6-dd1c81356cdb] Running
	I1024 19:21:24.181380  565581 system_pods.go:89] "kube-scheduler-multinode-961484" [2304ca9c-4994-4c85-8790-3e9e112351fd] Running
	I1024 19:21:24.181384  565581 system_pods.go:89] "storage-provisioner" [6ae1e99e-0a67-49f4-b89b-b708d36767cb] Running
	I1024 19:21:24.181392  565581 system_pods.go:126] duration metric: took 198.858356ms to wait for k8s-apps to be running ...
	I1024 19:21:24.181402  565581 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:21:24.181455  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:24.192492  565581 system_svc.go:56] duration metric: took 11.080486ms WaitForService to wait for kubelet.
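
The kubelet check relies purely on systemctl's exit status; --quiet suppresses all output, so "active" simply means exit code 0. The same probe run locally (minikube routes it through its SSH runner instead):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
    err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
    fmt.Println("kubelet active:", err == nil)
}
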
	I1024 19:21:24.192523  565581 kubeadm.go:581] duration metric: took 13.744051505s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:21:24.192545  565581 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:21:24.373932  565581 request.go:629] Waited for 181.302464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1024 19:21:24.374021  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1024 19:21:24.374029  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:24.374043  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:24.374056  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:24.377741  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:24.377785  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:24.377797  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:24.377805  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:24.377813  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:24 GMT
	I1024 19:21:24.377821  565581 round_trippers.go:580]     Audit-Id: 643c2afa-6a67-4564-9452-3e79e66f0027
	I1024 19:21:24.377828  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:24.377840  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:24.378036  565581 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I1024 19:21:24.378453  565581 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:21:24.378489  565581 node_conditions.go:123] node cpu capacity is 8
	I1024 19:21:24.378505  565581 node_conditions.go:105] duration metric: took 185.955558ms to run NodePressure ...
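
The NodePressure step is a read of the Node object's capacity fields (304681132Ki of ephemeral storage and 8 CPUs here). The equivalent client-go read:

package sketch

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// printCapacity reads the same fields node_conditions.go logs above.
func printCapacity(cs *kubernetes.Clientset, name string) error {
    node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    cpu := node.Status.Capacity[corev1.ResourceCPU]
    fmt.Printf("ephemeral-storage=%s cpu=%s\n", storage.String(), cpu.String())
    return nil
}
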
	I1024 19:21:24.378518  565581 start.go:228] waiting for startup goroutines ...
	I1024 19:21:24.378524  565581 start.go:233] waiting for cluster config update ...
	I1024 19:21:24.378534  565581 start.go:242] writing updated cluster config ...
	I1024 19:21:24.380966  565581 out.go:177] 
	I1024 19:21:24.383092  565581 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:21:24.383177  565581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json ...
	I1024 19:21:24.385204  565581 out.go:177] * Starting worker node multinode-961484-m02 in cluster multinode-961484
	I1024 19:21:24.386731  565581 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:21:24.388391  565581 out.go:177] * Pulling base image ...
	I1024 19:21:24.390910  565581 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:21:24.390951  565581 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:21:24.390962  565581 cache.go:57] Caching tarball of preloaded images
	I1024 19:21:24.391109  565581 preload.go:174] Found /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:21:24.391127  565581 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:21:24.391215  565581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json ...
	I1024 19:21:24.411888  565581 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:21:24.411924  565581 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:21:24.411952  565581 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:21:24.412001  565581 start.go:365] acquiring machines lock for multinode-961484-m02: {Name:mk403b692eec61285e7dcfa8edd601c379f93f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:21:24.412152  565581 start.go:369] acquired machines lock for "multinode-961484-m02" in 124.176µs
	I1024 19:21:24.412203  565581 start.go:93] Provisioning new machine with config: &{Name:multinode-961484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
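
The two wrapped blocks above are a %+v dump of the full machine config that is then persisted to profiles/multinode-961484/config.json. As a sketch of that save step only, here is a deliberately trimmed, hypothetical stand-in for the struct (type and field selection are illustrative, not minikube's actual definitions):

package main

import (
    "encoding/json"
    "fmt"
)

// profileConfig is a hypothetical subset of the config dumped above.
type profileConfig struct {
    Name              string
    KubernetesVersion string
    ContainerRuntime  string
    Memory            int
    CPUs              int
}

func main() {
    cfg := profileConfig{
        Name:              "multinode-961484",
        KubernetesVersion: "v1.28.3",
        ContainerRuntime:  "crio",
        Memory:            2200,
        CPUs:              2,
    }
    out, _ := json.MarshalIndent(cfg, "", "  ")
    fmt.Println(string(out)) // roughly what lands in profiles/<name>/config.json
}
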
	I1024 19:21:24.412341  565581 start.go:125] createHost starting for "m02" (driver="docker")
	I1024 19:21:24.414982  565581 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 19:21:24.415228  565581 start.go:159] libmachine.API.Create for "multinode-961484" (driver="docker")
	I1024 19:21:24.415265  565581 client.go:168] LocalClient.Create starting
	I1024 19:21:24.415378  565581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem
	I1024 19:21:24.415437  565581 main.go:141] libmachine: Decoding PEM data...
	I1024 19:21:24.415460  565581 main.go:141] libmachine: Parsing certificate...
	I1024 19:21:24.415537  565581 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem
	I1024 19:21:24.415564  565581 main.go:141] libmachine: Decoding PEM data...
	I1024 19:21:24.415577  565581 main.go:141] libmachine: Parsing certificate...
	I1024 19:21:24.415840  565581 cli_runner.go:164] Run: docker network inspect multinode-961484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:21:24.434968  565581 network_create.go:77] Found existing network {name:multinode-961484 subnet:0xc0025d3e00 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1024 19:21:24.435030  565581 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-961484-m02" container
	I1024 19:21:24.435101  565581 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:21:24.452498  565581 cli_runner.go:164] Run: docker volume create multinode-961484-m02 --label name.minikube.sigs.k8s.io=multinode-961484-m02 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:21:24.471121  565581 oci.go:103] Successfully created a docker volume multinode-961484-m02
	I1024 19:21:24.471197  565581 cli_runner.go:164] Run: docker run --rm --name multinode-961484-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-961484-m02 --entrypoint /usr/bin/test -v multinode-961484-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:21:25.053839  565581 oci.go:107] Successfully prepared a docker volume multinode-961484-m02
	I1024 19:21:25.053877  565581 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:21:25.053899  565581 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:21:25.053970  565581 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-961484-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:21:30.578850  565581 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-961484-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (5.524812492s)
	I1024 19:21:30.578886  565581 kic.go:200] duration metric: took 5.524983 seconds to extract preloaded images to volume
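
The 5.5s step above populates the node's named volume before the node container exists: a throwaway container bind-mounts the .lz4 preload read-only, mounts the volume at /extractDir, and runs tar as its entrypoint. A sketch of assembling that invocation with os/exec (host path and image digest abbreviated here; the verbatim command is quoted in the log above):

package main

import "os/exec"

func main() {
    image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-..." // full digest in the log
    args := []string{
        "run", "--rm",
        "--entrypoint", "/usr/bin/tar",
        // the preload tarball, read-only, from the host cache:
        "-v", "/path/to/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
        // the named volume the node container will later mount at /var:
        "-v", "multinode-961484-m02:/extractDir",
        image,
        "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    }
    if err := exec.Command("docker", args...).Run(); err != nil {
        panic(err)
    }
}

Doing the extraction once into the volume means the node starts with all images already in place instead of pulling them over the network.
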
	W1024 19:21:30.579039  565581 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:21:30.579128  565581 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:21:30.644691  565581 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-961484-m02 --name multinode-961484-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-961484-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-961484-m02 --network multinode-961484 --ip 192.168.58.3 --volume multinode-961484-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:21:31.004330  565581 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Running}}
	I1024 19:21:31.021792  565581 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Status}}
	I1024 19:21:31.040884  565581 cli_runner.go:164] Run: docker exec multinode-961484-m02 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:21:31.113999  565581 oci.go:144] the created container "multinode-961484-m02" has a running status.
	I1024 19:21:31.114052  565581 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa...
	I1024 19:21:31.286303  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 19:21:31.286349  565581 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:21:31.307143  565581 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Status}}
	I1024 19:21:31.324962  565581 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:21:31.324996  565581 kic_runner.go:114] Args: [docker exec --privileged multinode-961484-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:21:31.395344  565581 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Status}}
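
kic.go mints a fresh RSA keypair per node and installs the 381-byte public half as /home/docker/.ssh/authorized_keys. Producing that one authorized_keys line in Go, assuming golang.org/x/crypto/ssh and a conventional 2048-bit key:

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "fmt"

    "golang.org/x/crypto/ssh"
)

func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    pub, err := ssh.NewPublicKey(&key.PublicKey)
    if err != nil {
        panic(err)
    }
    // One "ssh-rsa AAAA..." line, ready to drop into authorized_keys.
    fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
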
	I1024 19:21:31.424167  565581 machine.go:88] provisioning docker machine ...
	I1024 19:21:31.424218  565581 ubuntu.go:169] provisioning hostname "multinode-961484-m02"
	I1024 19:21:31.424320  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:31.447360  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:21:31.447721  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33275 <nil> <nil>}
	I1024 19:21:31.447742  565581 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-961484-m02 && echo "multinode-961484-m02" | sudo tee /etc/hostname
	I1024 19:21:31.448511  565581 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42632->127.0.0.1:33275: read: connection reset by peer
	I1024 19:21:34.588309  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-961484-m02
	
	I1024 19:21:34.588413  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:34.610182  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:21:34.610559  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33275 <nil> <nil>}
	I1024 19:21:34.610580  565581 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-961484-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-961484-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-961484-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:21:34.733261  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
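
From here on every remote step follows the same pattern: dial 127.0.0.1:33275 (the host port Docker published for the container's 22/tcp), authenticate as "docker" with the generated key, and run one command per session; note the first dial above hit a connection reset while sshd was still starting and was simply retried. A sketch with golang.org/x/crypto/ssh (host-key checking disabled, which is only tolerable for a throwaway test container):

package main

import (
    "fmt"
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    keyBytes, err := os.ReadFile("/path/to/machines/multinode-961484-m02/id_rsa") // abbreviated
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(keyBytes)
    if err != nil {
        panic(err)
    }
    cfg := &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
    }
    client, err := ssh.Dial("tcp", "127.0.0.1:33275", cfg)
    if err != nil {
        panic(err)
    }
    defer client.Close()

    sess, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer sess.Close()
    out, err := sess.CombinedOutput("hostname")
    fmt.Printf("out=%q err=%v\n", out, err)
}
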
	I1024 19:21:34.733289  565581 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:21:34.733317  565581 ubuntu.go:177] setting up certificates
	I1024 19:21:34.733326  565581 provision.go:83] configureAuth start
	I1024 19:21:34.733412  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484-m02
	I1024 19:21:34.750826  565581 provision.go:138] copyHostCerts
	I1024 19:21:34.750869  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:21:34.750898  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:21:34.750906  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:21:34.750982  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:21:34.751058  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:21:34.751076  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:21:34.751080  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:21:34.751103  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:21:34.751150  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:21:34.751165  565581 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:21:34.751172  565581 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:21:34.751194  565581 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:21:34.751243  565581 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.multinode-961484-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-961484-m02]
	I1024 19:21:34.836563  565581 provision.go:172] copyRemoteCerts
	I1024 19:21:34.836664  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:21:34.836719  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:34.857444  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:21:34.950492  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:21:34.950566  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:21:34.980031  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:21:34.980113  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1024 19:21:35.005345  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:21:35.005430  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:21:35.031447  565581 provision.go:86] duration metric: configureAuth took 298.101899ms
	I1024 19:21:35.031481  565581 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:21:35.031649  565581 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:21:35.031742  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:35.050107  565581 main.go:141] libmachine: Using SSH client type: native
	I1024 19:21:35.050437  565581 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33275 <nil> <nil>}
	I1024 19:21:35.050454  565581 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:21:35.286130  565581 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:21:35.286158  565581 machine.go:91] provisioned docker machine in 3.861957836s
	I1024 19:21:35.286176  565581 client.go:171] LocalClient.Create took 10.870895012s
	I1024 19:21:35.286196  565581 start.go:167] duration metric: libmachine.API.Create for "multinode-961484" took 10.870973302s
	I1024 19:21:35.286205  565581 start.go:300] post-start starting for "multinode-961484-m02" (driver="docker")
	I1024 19:21:35.286217  565581 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:21:35.286302  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:21:35.286360  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:35.307007  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:21:35.400186  565581 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:21:35.404325  565581 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1024 19:21:35.404353  565581 command_runner.go:130] > NAME="Ubuntu"
	I1024 19:21:35.404363  565581 command_runner.go:130] > VERSION_ID="22.04"
	I1024 19:21:35.404370  565581 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1024 19:21:35.404377  565581 command_runner.go:130] > VERSION_CODENAME=jammy
	I1024 19:21:35.404383  565581 command_runner.go:130] > ID=ubuntu
	I1024 19:21:35.404389  565581 command_runner.go:130] > ID_LIKE=debian
	I1024 19:21:35.404395  565581 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1024 19:21:35.404404  565581 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1024 19:21:35.404414  565581 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1024 19:21:35.404427  565581 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1024 19:21:35.404439  565581 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1024 19:21:35.404515  565581 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:21:35.404555  565581 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:21:35.404569  565581 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:21:35.404578  565581 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:21:35.404597  565581 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:21:35.404682  565581 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:21:35.404809  565581 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:21:35.404824  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /etc/ssl/certs/4783232.pem
	I1024 19:21:35.404942  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:21:35.415428  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:21:35.443984  565581 start.go:303] post-start completed in 157.757373ms
	I1024 19:21:35.444518  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484-m02
	I1024 19:21:35.465915  565581 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/config.json ...
	I1024 19:21:35.466220  565581 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:21:35.466269  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:35.485918  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:21:35.574619  565581 command_runner.go:130] > 21%!
	(MISSING)I1024 19:21:35.574704  565581 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:21:35.580743  565581 command_runner.go:130] > 232G
	I1024 19:21:35.580821  565581 start.go:128] duration metric: createHost completed in 11.168465043s
	I1024 19:21:35.580835  565581 start.go:83] releasing machines lock for "multinode-961484-m02", held for 11.16866847s
	I1024 19:21:35.580921  565581 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484-m02
	I1024 19:21:35.602275  565581 out.go:177] * Found network options:
	I1024 19:21:35.604495  565581 out.go:177]   - NO_PROXY=192.168.58.2
	W1024 19:21:35.606504  565581 proxy.go:119] fail to check proxy env: Error ip not in block
	W1024 19:21:35.606562  565581 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 19:21:35.606655  565581 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:21:35.606698  565581 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:21:35.606714  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:35.606752  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:21:35.628050  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:21:35.629035  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:21:35.820820  565581 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 19:21:35.867666  565581 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:21:35.872859  565581 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1024 19:21:35.872919  565581 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1024 19:21:35.872931  565581 command_runner.go:130] > Device: b0h/176d	Inode: 2845527     Links: 1
	I1024 19:21:35.872942  565581 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:21:35.872952  565581 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1024 19:21:35.872962  565581 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1024 19:21:35.872974  565581 command_runner.go:130] > Change: 2023-10-24 19:00:55.078905836 +0000
	I1024 19:21:35.872985  565581 command_runner.go:130] >  Birth: 2023-10-24 19:00:55.078905836 +0000
	I1024 19:21:35.873142  565581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:21:35.895858  565581 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:21:35.895947  565581 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:21:35.928044  565581 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1024 19:21:35.928108  565581 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1024 19:21:35.928119  565581 start.go:472] detecting cgroup driver to use...
	I1024 19:21:35.928166  565581 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:21:35.928222  565581 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:21:35.943817  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:21:35.954723  565581 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:21:35.954781  565581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:21:35.968151  565581 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:21:35.985038  565581 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:21:36.078389  565581 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:21:36.095872  565581 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 19:21:36.175398  565581 docker.go:214] disabling docker service ...
	I1024 19:21:36.175483  565581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:21:36.195564  565581 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:21:36.207605  565581 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:21:36.297263  565581 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 19:21:36.297378  565581 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:21:36.312997  565581 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 19:21:36.399836  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:21:36.412295  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:21:36.431496  565581 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 19:21:36.432564  565581 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:21:36.432637  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:21:36.446573  565581 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:21:36.446685  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:21:36.458806  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:21:36.470145  565581 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:21:36.480467  565581 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:21:36.489679  565581 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:21:36.497676  565581 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 19:21:36.498407  565581 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:21:36.507792  565581 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:21:36.592677  565581 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:21:36.713491  565581 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:21:36.713575  565581 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:21:36.718187  565581 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 19:21:36.718221  565581 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 19:21:36.718229  565581 command_runner.go:130] > Device: b9h/185d	Inode: 190         Links: 1
	I1024 19:21:36.718237  565581 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:21:36.718242  565581 command_runner.go:130] > Access: 2023-10-24 19:21:36.698035244 +0000
	I1024 19:21:36.718249  565581 command_runner.go:130] > Modify: 2023-10-24 19:21:36.698035244 +0000
	I1024 19:21:36.718254  565581 command_runner.go:130] > Change: 2023-10-24 19:21:36.698035244 +0000
	I1024 19:21:36.718258  565581 command_runner.go:130] >  Birth: -
	I1024 19:21:36.718279  565581 start.go:540] Will wait 60s for crictl version
	I1024 19:21:36.718322  565581 ssh_runner.go:195] Run: which crictl
	I1024 19:21:36.722460  565581 command_runner.go:130] > /usr/bin/crictl
	I1024 19:21:36.722551  565581 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:21:36.759569  565581 command_runner.go:130] > Version:  0.1.0
	I1024 19:21:36.759602  565581 command_runner.go:130] > RuntimeName:  cri-o
	I1024 19:21:36.759611  565581 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1024 19:21:36.759620  565581 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 19:21:36.759648  565581 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:21:36.759716  565581 ssh_runner.go:195] Run: crio --version
	I1024 19:21:36.794667  565581 command_runner.go:130] > crio version 1.24.6
	I1024 19:21:36.794703  565581 command_runner.go:130] > Version:          1.24.6
	I1024 19:21:36.794713  565581 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 19:21:36.794720  565581 command_runner.go:130] > GitTreeState:     clean
	I1024 19:21:36.794730  565581 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 19:21:36.794736  565581 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 19:21:36.794743  565581 command_runner.go:130] > Compiler:         gc
	I1024 19:21:36.794750  565581 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:21:36.794757  565581 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:21:36.794765  565581 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:21:36.794769  565581 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:21:36.794773  565581 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:21:36.796666  565581 ssh_runner.go:195] Run: crio --version
	I1024 19:21:36.834494  565581 command_runner.go:130] > crio version 1.24.6
	I1024 19:21:36.834527  565581 command_runner.go:130] > Version:          1.24.6
	I1024 19:21:36.834534  565581 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 19:21:36.834539  565581 command_runner.go:130] > GitTreeState:     clean
	I1024 19:21:36.834545  565581 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 19:21:36.834550  565581 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 19:21:36.834554  565581 command_runner.go:130] > Compiler:         gc
	I1024 19:21:36.834559  565581 command_runner.go:130] > Platform:         linux/amd64
	I1024 19:21:36.834565  565581 command_runner.go:130] > Linkmode:         dynamic
	I1024 19:21:36.834577  565581 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 19:21:36.834584  565581 command_runner.go:130] > SeccompEnabled:   true
	I1024 19:21:36.834596  565581 command_runner.go:130] > AppArmorEnabled:  false
	I1024 19:21:36.837173  565581 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:21:36.839149  565581 out.go:177]   - env NO_PROXY=192.168.58.2
	I1024 19:21:36.840991  565581 cli_runner.go:164] Run: docker network inspect multinode-961484 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:21:36.858183  565581 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1024 19:21:36.863641  565581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:21:36.876633  565581 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484 for IP: 192.168.58.3
	I1024 19:21:36.876670  565581 certs.go:190] acquiring lock for shared ca certs: {Name:mkd071e4924662af2a94ad3f2018330ff8506826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:21:36.876838  565581 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key
	I1024 19:21:36.876883  565581 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key
	I1024 19:21:36.876901  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:21:36.876916  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:21:36.876929  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:21:36.876941  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:21:36.877004  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem (1338 bytes)
	W1024 19:21:36.877202  565581 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323_empty.pem, impossibly tiny 0 bytes
	I1024 19:21:36.877235  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:21:36.877267  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:21:36.877290  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:21:36.877321  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem (1675 bytes)
	I1024 19:21:36.877388  565581 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:21:36.877427  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem -> /usr/share/ca-certificates/478323.pem
	I1024 19:21:36.877445  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> /usr/share/ca-certificates/4783232.pem
	I1024 19:21:36.877457  565581 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:21:36.878075  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:21:36.908896  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:21:36.934938  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:21:36.961398  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:21:36.986018  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem --> /usr/share/ca-certificates/478323.pem (1338 bytes)
	I1024 19:21:37.014620  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /usr/share/ca-certificates/4783232.pem (1708 bytes)
	I1024 19:21:37.042796  565581 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:21:37.067696  565581 ssh_runner.go:195] Run: openssl version
	I1024 19:21:37.073352  565581 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1024 19:21:37.073445  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478323.pem && ln -fs /usr/share/ca-certificates/478323.pem /etc/ssl/certs/478323.pem"
	I1024 19:21:37.083113  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478323.pem
	I1024 19:21:37.087240  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:21:37.087288  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:21:37.087332  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478323.pem
	I1024 19:21:37.094409  565581 command_runner.go:130] > 51391683
	I1024 19:21:37.094503  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478323.pem /etc/ssl/certs/51391683.0"
	I1024 19:21:37.103829  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783232.pem && ln -fs /usr/share/ca-certificates/4783232.pem /etc/ssl/certs/4783232.pem"
	I1024 19:21:37.112954  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783232.pem
	I1024 19:21:37.116733  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:21:37.116795  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:21:37.116861  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783232.pem
	I1024 19:21:37.123294  565581 command_runner.go:130] > 3ec20f2e
	I1024 19:21:37.123383  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783232.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:21:37.133242  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:21:37.143475  565581 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:21:37.147209  565581 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:21:37.147261  565581 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:21:37.147300  565581 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:21:37.154108  565581 command_runner.go:130] > b5213941
	I1024 19:21:37.154189  565581 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:21:37.164057  565581 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:21:37.167793  565581 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:21:37.167846  565581 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:21:37.167965  565581 ssh_runner.go:195] Run: crio config
	I1024 19:21:37.219218  565581 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 19:21:37.219255  565581 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 19:21:37.219268  565581 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 19:21:37.219275  565581 command_runner.go:130] > #
	I1024 19:21:37.219288  565581 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 19:21:37.219300  565581 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 19:21:37.219310  565581 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 19:21:37.219329  565581 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 19:21:37.219342  565581 command_runner.go:130] > # reload'.
	I1024 19:21:37.219354  565581 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 19:21:37.219378  565581 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 19:21:37.219394  565581 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 19:21:37.219405  565581 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 19:21:37.219415  565581 command_runner.go:130] > [crio]
	I1024 19:21:37.219433  565581 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 19:21:37.219467  565581 command_runner.go:130] > # containers images, in this directory.
	I1024 19:21:37.219488  565581 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1024 19:21:37.219504  565581 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 19:21:37.219515  565581 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1024 19:21:37.219527  565581 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 19:21:37.219539  565581 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 19:21:37.219554  565581 command_runner.go:130] > # storage_driver = "vfs"
	I1024 19:21:37.219569  565581 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 19:21:37.219583  565581 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 19:21:37.219594  565581 command_runner.go:130] > # storage_option = [
	I1024 19:21:37.219604  565581 command_runner.go:130] > # ]
	I1024 19:21:37.219616  565581 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 19:21:37.219629  565581 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 19:21:37.219655  565581 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 19:21:37.219665  565581 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 19:21:37.219674  565581 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 19:21:37.219680  565581 command_runner.go:130] > # always happen on a node reboot
	I1024 19:21:37.219689  565581 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 19:21:37.219701  565581 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 19:21:37.219715  565581 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 19:21:37.219732  565581 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 19:21:37.219744  565581 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 19:21:37.219760  565581 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 19:21:37.219776  565581 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 19:21:37.219786  565581 command_runner.go:130] > # internal_wipe = true
	I1024 19:21:37.219796  565581 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 19:21:37.219809  565581 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 19:21:37.219823  565581 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 19:21:37.219835  565581 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 19:21:37.219849  565581 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 19:21:37.219856  565581 command_runner.go:130] > [crio.api]
	I1024 19:21:37.219870  565581 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 19:21:37.219877  565581 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 19:21:37.219889  565581 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 19:21:37.219897  565581 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 19:21:37.219914  565581 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 19:21:37.219927  565581 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 19:21:37.219938  565581 command_runner.go:130] > # stream_port = "0"
	I1024 19:21:37.219950  565581 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 19:21:37.219957  565581 command_runner.go:130] > # stream_enable_tls = false
	I1024 19:21:37.219967  565581 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 19:21:37.219978  565581 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 19:21:37.219989  565581 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 19:21:37.220003  565581 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 19:21:37.220014  565581 command_runner.go:130] > # minutes.
	I1024 19:21:37.220025  565581 command_runner.go:130] > # stream_tls_cert = ""
	I1024 19:21:37.220037  565581 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 19:21:37.220051  565581 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 19:21:37.220062  565581 command_runner.go:130] > # stream_tls_key = ""
	I1024 19:21:37.220074  565581 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 19:21:37.220088  565581 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 19:21:37.220100  565581 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 19:21:37.220111  565581 command_runner.go:130] > # stream_tls_ca = ""
	I1024 19:21:37.220125  565581 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:21:37.220137  565581 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1024 19:21:37.220152  565581 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 19:21:37.220164  565581 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1024 19:21:37.220218  565581 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 19:21:37.220231  565581 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 19:21:37.220239  565581 command_runner.go:130] > [crio.runtime]
	I1024 19:21:37.220253  565581 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 19:21:37.220267  565581 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 19:21:37.220280  565581 command_runner.go:130] > # "nofile=1024:2048"
	I1024 19:21:37.220294  565581 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 19:21:37.220304  565581 command_runner.go:130] > # default_ulimits = [
	I1024 19:21:37.220313  565581 command_runner.go:130] > # ]
	I1024 19:21:37.220325  565581 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 19:21:37.220335  565581 command_runner.go:130] > # no_pivot = false
	I1024 19:21:37.220348  565581 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 19:21:37.220363  565581 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 19:21:37.220376  565581 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 19:21:37.220391  565581 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 19:21:37.220402  565581 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 19:21:37.220415  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:21:37.220424  565581 command_runner.go:130] > # conmon = ""
	I1024 19:21:37.220432  565581 command_runner.go:130] > # Cgroup setting for conmon
	I1024 19:21:37.220445  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 19:21:37.220461  565581 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 19:21:37.220471  565581 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 19:21:37.220481  565581 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 19:21:37.220496  565581 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 19:21:37.220506  565581 command_runner.go:130] > # conmon_env = [
	I1024 19:21:37.220514  565581 command_runner.go:130] > # ]
	I1024 19:21:37.220527  565581 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 19:21:37.220539  565581 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 19:21:37.220549  565581 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 19:21:37.220555  565581 command_runner.go:130] > # default_env = [
	I1024 19:21:37.220562  565581 command_runner.go:130] > # ]
	I1024 19:21:37.220572  565581 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 19:21:37.220582  565581 command_runner.go:130] > # selinux = false
	I1024 19:21:37.220596  565581 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 19:21:37.220610  565581 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 19:21:37.220622  565581 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 19:21:37.220629  565581 command_runner.go:130] > # seccomp_profile = ""
	I1024 19:21:37.220638  565581 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 19:21:37.220647  565581 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 19:21:37.220658  565581 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 19:21:37.220671  565581 command_runner.go:130] > # which might increase security.
	I1024 19:21:37.220678  565581 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1024 19:21:37.220688  565581 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 19:21:37.220698  565581 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 19:21:37.220709  565581 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 19:21:37.220726  565581 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 19:21:37.220736  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:21:37.220744  565581 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 19:21:37.220753  565581 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 19:21:37.220761  565581 command_runner.go:130] > # the cgroup blockio controller.
	I1024 19:21:37.220793  565581 command_runner.go:130] > # blockio_config_file = ""
	I1024 19:21:37.220805  565581 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 19:21:37.220812  565581 command_runner.go:130] > # irqbalance daemon.
	I1024 19:21:37.220822  565581 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 19:21:37.220832  565581 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 19:21:37.220841  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:21:37.220850  565581 command_runner.go:130] > # rdt_config_file = ""
	I1024 19:21:37.220858  565581 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 19:21:37.220866  565581 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 19:21:37.220877  565581 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 19:21:37.220886  565581 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 19:21:37.220896  565581 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 19:21:37.220911  565581 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 19:21:37.220919  565581 command_runner.go:130] > # will be added.
	I1024 19:21:37.220928  565581 command_runner.go:130] > # default_capabilities = [
	I1024 19:21:37.220934  565581 command_runner.go:130] > # 	"CHOWN",
	I1024 19:21:37.220941  565581 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 19:21:37.220952  565581 command_runner.go:130] > # 	"FSETID",
	I1024 19:21:37.220959  565581 command_runner.go:130] > # 	"FOWNER",
	I1024 19:21:37.220967  565581 command_runner.go:130] > # 	"SETGID",
	I1024 19:21:37.220974  565581 command_runner.go:130] > # 	"SETUID",
	I1024 19:21:37.220981  565581 command_runner.go:130] > # 	"SETPCAP",
	I1024 19:21:37.220988  565581 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 19:21:37.220996  565581 command_runner.go:130] > # 	"KILL",
	I1024 19:21:37.221002  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221018  565581 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1024 19:21:37.221031  565581 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1024 19:21:37.221040  565581 command_runner.go:130] > # add_inheritable_capabilities = true
	I1024 19:21:37.221053  565581 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 19:21:37.221067  565581 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:21:37.221077  565581 command_runner.go:130] > # default_sysctls = [
	I1024 19:21:37.221084  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221096  565581 command_runner.go:130] > # List of devices on the host that a
	I1024 19:21:37.221107  565581 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 19:21:37.221117  565581 command_runner.go:130] > # allowed_devices = [
	I1024 19:21:37.221124  565581 command_runner.go:130] > # 	"/dev/fuse",
	I1024 19:21:37.221133  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221145  565581 command_runner.go:130] > # List of additional devices. specified as
	I1024 19:21:37.221205  565581 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 19:21:37.221218  565581 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 19:21:37.221228  565581 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 19:21:37.221235  565581 command_runner.go:130] > # additional_devices = [
	I1024 19:21:37.221240  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221250  565581 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 19:21:37.221260  565581 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 19:21:37.221269  565581 command_runner.go:130] > # 	"/etc/cdi",
	I1024 19:21:37.221278  565581 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 19:21:37.221284  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221314  565581 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 19:21:37.221330  565581 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 19:21:37.221341  565581 command_runner.go:130] > # Defaults to false.
	I1024 19:21:37.221354  565581 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 19:21:37.221370  565581 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 19:21:37.221385  565581 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 19:21:37.221396  565581 command_runner.go:130] > # hooks_dir = [
	I1024 19:21:37.221407  565581 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 19:21:37.221414  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221424  565581 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 19:21:37.221434  565581 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 19:21:37.221443  565581 command_runner.go:130] > # its default mounts from the following two files:
	I1024 19:21:37.221454  565581 command_runner.go:130] > #
	I1024 19:21:37.221466  565581 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 19:21:37.221479  565581 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 19:21:37.221487  565581 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 19:21:37.221492  565581 command_runner.go:130] > #
	I1024 19:21:37.221499  565581 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 19:21:37.221505  565581 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 19:21:37.221512  565581 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 19:21:37.221517  565581 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 19:21:37.221520  565581 command_runner.go:130] > #
	I1024 19:21:37.221525  565581 command_runner.go:130] > # default_mounts_file = ""
	I1024 19:21:37.221530  565581 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 19:21:37.221538  565581 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 19:21:37.221542  565581 command_runner.go:130] > # pids_limit = 0
	I1024 19:21:37.221548  565581 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 19:21:37.221553  565581 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 19:21:37.221559  565581 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 19:21:37.221567  565581 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 19:21:37.221574  565581 command_runner.go:130] > # log_size_max = -1
	I1024 19:21:37.221581  565581 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 19:21:37.221585  565581 command_runner.go:130] > # log_to_journald = false
	I1024 19:21:37.221591  565581 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 19:21:37.221596  565581 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 19:21:37.221601  565581 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 19:21:37.221606  565581 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 19:21:37.221611  565581 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 19:21:37.221615  565581 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 19:21:37.221621  565581 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 19:21:37.221625  565581 command_runner.go:130] > # read_only = false
	I1024 19:21:37.221630  565581 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 19:21:37.221636  565581 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 19:21:37.221641  565581 command_runner.go:130] > # live configuration reload.
	I1024 19:21:37.221645  565581 command_runner.go:130] > # log_level = "info"
	I1024 19:21:37.221650  565581 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 19:21:37.221655  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:21:37.221659  565581 command_runner.go:130] > # log_filter = ""
	I1024 19:21:37.221665  565581 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 19:21:37.221670  565581 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 19:21:37.221674  565581 command_runner.go:130] > # separated by comma.
	I1024 19:21:37.221678  565581 command_runner.go:130] > # uid_mappings = ""
	I1024 19:21:37.221684  565581 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 19:21:37.221690  565581 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 19:21:37.221693  565581 command_runner.go:130] > # separated by comma.
	I1024 19:21:37.221697  565581 command_runner.go:130] > # gid_mappings = ""
	I1024 19:21:37.221703  565581 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 19:21:37.221709  565581 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:21:37.221715  565581 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:21:37.221719  565581 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 19:21:37.221725  565581 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 19:21:37.221731  565581 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 19:21:37.221737  565581 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 19:21:37.221741  565581 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 19:21:37.221747  565581 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 19:21:37.221752  565581 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 19:21:37.221757  565581 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 19:21:37.221763  565581 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 19:21:37.221768  565581 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 19:21:37.221807  565581 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 19:21:37.221813  565581 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 19:21:37.221818  565581 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 19:21:37.221822  565581 command_runner.go:130] > # drop_infra_ctr = true
	I1024 19:21:37.221828  565581 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 19:21:37.221833  565581 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 19:21:37.221840  565581 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 19:21:37.221845  565581 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 19:21:37.221850  565581 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 19:21:37.221855  565581 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 19:21:37.221860  565581 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 19:21:37.221866  565581 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 19:21:37.221870  565581 command_runner.go:130] > # pinns_path = ""
	I1024 19:21:37.221876  565581 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 19:21:37.221882  565581 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 19:21:37.221888  565581 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 19:21:37.221892  565581 command_runner.go:130] > # default_runtime = "runc"
	I1024 19:21:37.221897  565581 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 19:21:37.221904  565581 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1024 19:21:37.221914  565581 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 19:21:37.221919  565581 command_runner.go:130] > # creation as a file is not desired either.
	I1024 19:21:37.221927  565581 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 19:21:37.221932  565581 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 19:21:37.221937  565581 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 19:21:37.221940  565581 command_runner.go:130] > # ]
	I1024 19:21:37.221946  565581 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 19:21:37.221953  565581 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 19:21:37.221959  565581 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 19:21:37.221965  565581 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 19:21:37.221968  565581 command_runner.go:130] > #
	I1024 19:21:37.221973  565581 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 19:21:37.221978  565581 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 19:21:37.221982  565581 command_runner.go:130] > #  runtime_type = "oci"
	I1024 19:21:37.221988  565581 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 19:21:37.221993  565581 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 19:21:37.221997  565581 command_runner.go:130] > #  allowed_annotations = []
	I1024 19:21:37.222000  565581 command_runner.go:130] > # Where:
	I1024 19:21:37.222005  565581 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 19:21:37.222011  565581 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 19:21:37.222017  565581 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 19:21:37.222023  565581 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 19:21:37.222027  565581 command_runner.go:130] > #   in $PATH.
	I1024 19:21:37.222033  565581 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 19:21:37.222037  565581 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 19:21:37.222043  565581 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 19:21:37.222047  565581 command_runner.go:130] > #   state.
	I1024 19:21:37.222054  565581 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 19:21:37.222059  565581 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 19:21:37.222065  565581 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 19:21:37.222070  565581 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 19:21:37.222076  565581 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 19:21:37.222083  565581 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 19:21:37.222087  565581 command_runner.go:130] > #   The currently recognized values are:
	I1024 19:21:37.222093  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 19:21:37.222100  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 19:21:37.222106  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 19:21:37.222111  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 19:21:37.222119  565581 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 19:21:37.222125  565581 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 19:21:37.222131  565581 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 19:21:37.222137  565581 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 19:21:37.222142  565581 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 19:21:37.222146  565581 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 19:21:37.222151  565581 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1024 19:21:37.222155  565581 command_runner.go:130] > runtime_type = "oci"
	I1024 19:21:37.222159  565581 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 19:21:37.222163  565581 command_runner.go:130] > runtime_config_path = ""
	I1024 19:21:37.222167  565581 command_runner.go:130] > monitor_path = ""
	I1024 19:21:37.222171  565581 command_runner.go:130] > monitor_cgroup = ""
	I1024 19:21:37.222176  565581 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 19:21:37.222234  565581 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 19:21:37.222239  565581 command_runner.go:130] > # running containers
	I1024 19:21:37.222244  565581 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 19:21:37.222250  565581 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 19:21:37.222256  565581 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 19:21:37.222261  565581 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1024 19:21:37.222266  565581 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 19:21:37.222273  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 19:21:37.222278  565581 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 19:21:37.222282  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 19:21:37.222287  565581 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 19:21:37.222291  565581 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 19:21:37.222297  565581 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 19:21:37.222302  565581 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 19:21:37.222308  565581 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 19:21:37.222315  565581 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1024 19:21:37.222323  565581 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 19:21:37.222328  565581 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 19:21:37.222337  565581 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 19:21:37.222344  565581 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 19:21:37.222349  565581 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 19:21:37.222358  565581 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 19:21:37.222362  565581 command_runner.go:130] > # Example:
	I1024 19:21:37.222366  565581 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 19:21:37.222372  565581 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 19:21:37.222377  565581 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 19:21:37.222382  565581 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 19:21:37.222385  565581 command_runner.go:130] > # cpuset = "0-1"
	I1024 19:21:37.222389  565581 command_runner.go:130] > # cpushares = 0
	I1024 19:21:37.222394  565581 command_runner.go:130] > # Where:
	I1024 19:21:37.222398  565581 command_runner.go:130] > # The workload name is workload-type.
	I1024 19:21:37.222405  565581 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 19:21:37.222410  565581 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 19:21:37.222416  565581 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 19:21:37.222423  565581 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 19:21:37.222430  565581 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
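Putting the workload example above together, a pod would opt in with two annotations. A hedged sketch using the Kubernetes Go types; the container name "app" and the "512" share value are made-up illustrations, and this follows the concrete annotation form shown directly above:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// The key-only activation annotation opts the pod into "workload-type";
		// the prefixed annotation overrides cpushares for the container "app".
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "demo",
				Annotations: map[string]string{
					"io.crio/workload":          "",
					"io.crio.workload-type/app": `{"cpushares": "512"}`,
				},
			},
		}
		fmt.Println(pod.Name, pod.Annotations)
	}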
	I1024 19:21:37.222434  565581 command_runner.go:130] > # 
	I1024 19:21:37.222440  565581 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 19:21:37.222443  565581 command_runner.go:130] > #
	I1024 19:21:37.222455  565581 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 19:21:37.222462  565581 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 19:21:37.222467  565581 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 19:21:37.222474  565581 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 19:21:37.222479  565581 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 19:21:37.222483  565581 command_runner.go:130] > [crio.image]
	I1024 19:21:37.222488  565581 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 19:21:37.222493  565581 command_runner.go:130] > # default_transport = "docker://"
	I1024 19:21:37.222499  565581 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 19:21:37.222505  565581 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:21:37.222509  565581 command_runner.go:130] > # global_auth_file = ""
	I1024 19:21:37.222514  565581 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 19:21:37.222519  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:21:37.222524  565581 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 19:21:37.222530  565581 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 19:21:37.222536  565581 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 19:21:37.222540  565581 command_runner.go:130] > # This option supports live configuration reload.
	I1024 19:21:37.222545  565581 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 19:21:37.222550  565581 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 19:21:37.222556  565581 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 19:21:37.222562  565581 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 19:21:37.222568  565581 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 19:21:37.222572  565581 command_runner.go:130] > # pause_command = "/pause"
	I1024 19:21:37.222578  565581 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 19:21:37.222584  565581 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 19:21:37.222590  565581 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 19:21:37.222597  565581 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 19:21:37.222602  565581 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 19:21:37.222606  565581 command_runner.go:130] > # signature_policy = ""
	I1024 19:21:37.222630  565581 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 19:21:37.222637  565581 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 19:21:37.222642  565581 command_runner.go:130] > # changing them here.
	I1024 19:21:37.222646  565581 command_runner.go:130] > # insecure_registries = [
	I1024 19:21:37.222650  565581 command_runner.go:130] > # ]
	I1024 19:21:37.222656  565581 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 19:21:37.222661  565581 command_runner.go:130] > # ignore; the last ignores volumes entirely.
	I1024 19:21:37.222665  565581 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 19:21:37.222670  565581 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 19:21:37.222674  565581 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 19:21:37.222680  565581 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 19:21:37.222684  565581 command_runner.go:130] > # CNI plugins.
	I1024 19:21:37.222688  565581 command_runner.go:130] > [crio.network]
	I1024 19:21:37.222694  565581 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 19:21:37.222699  565581 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1024 19:21:37.222703  565581 command_runner.go:130] > # cni_default_network = ""
	I1024 19:21:37.222709  565581 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 19:21:37.222713  565581 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 19:21:37.222718  565581 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 19:21:37.222722  565581 command_runner.go:130] > # plugin_dirs = [
	I1024 19:21:37.222726  565581 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 19:21:37.222729  565581 command_runner.go:130] > # ]
	I1024 19:21:37.222735  565581 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1024 19:21:37.222738  565581 command_runner.go:130] > [crio.metrics]
	I1024 19:21:37.222743  565581 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 19:21:37.222747  565581 command_runner.go:130] > # enable_metrics = false
	I1024 19:21:37.222751  565581 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 19:21:37.222756  565581 command_runner.go:130] > # By default, all metrics are enabled.
	I1024 19:21:37.222762  565581 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 19:21:37.222768  565581 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 19:21:37.222773  565581 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 19:21:37.222777  565581 command_runner.go:130] > # metrics_collectors = [
	I1024 19:21:37.222781  565581 command_runner.go:130] > # 	"operations",
	I1024 19:21:37.222786  565581 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 19:21:37.222791  565581 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 19:21:37.222795  565581 command_runner.go:130] > # 	"operations_errors",
	I1024 19:21:37.222799  565581 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 19:21:37.222803  565581 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 19:21:37.222808  565581 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 19:21:37.222812  565581 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 19:21:37.222817  565581 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 19:21:37.222823  565581 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 19:21:37.222830  565581 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 19:21:37.222836  565581 command_runner.go:130] > # 	"containers_oom_total",
	I1024 19:21:37.222842  565581 command_runner.go:130] > # 	"containers_oom",
	I1024 19:21:37.222850  565581 command_runner.go:130] > # 	"processes_defunct",
	I1024 19:21:37.222856  565581 command_runner.go:130] > # 	"operations_total",
	I1024 19:21:37.222863  565581 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 19:21:37.222871  565581 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 19:21:37.222889  565581 command_runner.go:130] > # 	"operations_errors_total",
	I1024 19:21:37.222894  565581 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 19:21:37.222898  565581 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 19:21:37.222902  565581 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 19:21:37.222907  565581 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 19:21:37.222911  565581 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 19:21:37.222915  565581 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 19:21:37.222919  565581 command_runner.go:130] > # ]
	I1024 19:21:37.222924  565581 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 19:21:37.222932  565581 command_runner.go:130] > # metrics_port = 9090
	I1024 19:21:37.222940  565581 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 19:21:37.222947  565581 command_runner.go:130] > # metrics_socket = ""
	I1024 19:21:37.222957  565581 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 19:21:37.222968  565581 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 19:21:37.222978  565581 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 19:21:37.222983  565581 command_runner.go:130] > # certificate on any modification event.
	I1024 19:21:37.222988  565581 command_runner.go:130] > # metrics_cert = ""
	I1024 19:21:37.222993  565581 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 19:21:37.222997  565581 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 19:21:37.223001  565581 command_runner.go:130] > # metrics_key = ""
	I1024 19:21:37.223008  565581 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 19:21:37.223012  565581 command_runner.go:130] > [crio.tracing]
	I1024 19:21:37.223017  565581 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 19:21:37.223021  565581 command_runner.go:130] > # enable_tracing = false
	I1024 19:21:37.223028  565581 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 19:21:37.223035  565581 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 19:21:37.223044  565581 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 19:21:37.223052  565581 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 19:21:37.223062  565581 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 19:21:37.223069  565581 command_runner.go:130] > [crio.stats]
	I1024 19:21:37.223078  565581 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 19:21:37.223083  565581 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 19:21:37.223088  565581 command_runner.go:130] > # stats_collection_period = 0
	I1024 19:21:37.225923  565581 command_runner.go:130] ! time="2023-10-24 19:21:37.215744967Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1024 19:21:37.225969  565581 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 19:21:37.226062  565581 cni.go:84] Creating CNI manager for ""
	I1024 19:21:37.226073  565581 cni.go:136] 2 nodes found, recommending kindnet
	I1024 19:21:37.226087  565581 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:21:37.226118  565581 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-961484 NodeName:multinode-961484-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:21:37.226290  565581 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-961484-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
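The fragment above is generated per node: note the m02 name and node-ip 192.168.58.3 in nodeRegistration. A minimal sketch of rendering such a per-node fragment with Go's text/template, assuming a hypothetical nodeCfg struct; this is not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// nodeCfg carries the per-node values that differ between the control
	// plane and m02 above. Struct and template are illustrative only.
	type nodeCfg struct {
		Name, NodeIP, CRISocket string
	}

	const nodeRegistration = "nodeRegistration:\n" +
		"  criSocket: {{.CRISocket}}\n" +
		"  name: \"{{.Name}}\"\n" +
		"  kubeletExtraArgs:\n" +
		"    node-ip: {{.NodeIP}}\n" +
		"  taints: []\n"

	func main() {
		t := template.Must(template.New("nodeRegistration").Parse(nodeRegistration))
		// Values taken from the generated config above; only the rendering
		// mechanism (text/template) is an assumption.
		cfg := nodeCfg{
			Name:      "multinode-961484-m02",
			NodeIP:    "192.168.58.3",
			CRISocket: "unix:///var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}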
	
	I1024 19:21:37.226370  565581 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-961484-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:21:37.226444  565581 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:21:37.237818  565581 command_runner.go:130] > kubeadm
	I1024 19:21:37.237842  565581 command_runner.go:130] > kubectl
	I1024 19:21:37.237848  565581 command_runner.go:130] > kubelet
	I1024 19:21:37.237876  565581 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:21:37.237947  565581 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1024 19:21:37.248696  565581 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1024 19:21:37.268940  565581 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:21:37.286831  565581 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:21:37.290500  565581 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
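The bash pipeline above makes the hosts entry idempotent: filter out any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts. A rough Go equivalent of the same filter-then-append pattern; upsertHost and the scratch filename are hypothetical, and a real version would go through sudo as the log does:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so that exactly one line
	// maps host to ip, mirroring the grep -v / echo / cp pipeline above.
	func upsertHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue // drop blanks and any stale entry for host
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Writing to a scratch copy here; the real flow targets /etc/hosts.
		if err := upsertHost("hosts.copy", "192.168.58.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}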
	I1024 19:21:37.302251  565581 host.go:66] Checking if "multinode-961484" exists ...
	I1024 19:21:37.302635  565581 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:21:37.302571  565581 start.go:304] JoinCluster: &{Name:multinode-961484 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-961484 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:21:37.302754  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1024 19:21:37.302808  565581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:21:37.325272  565581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:21:37.475252  565581 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zb64nl.tintn3kq71xp2d3q --discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 
	I1024 19:21:37.479911  565581 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:21:37.479964  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb64nl.tintn3kq71xp2d3q --discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-961484-m02"
	I1024 19:21:37.523471  565581 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 19:21:37.562942  565581 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:21:37.562971  565581 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-gcp
	I1024 19:21:37.562980  565581 command_runner.go:130] > OS: Linux
	I1024 19:21:37.562989  565581 command_runner.go:130] > CGROUPS_CPU: enabled
	I1024 19:21:37.562998  565581 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1024 19:21:37.563006  565581 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1024 19:21:37.563014  565581 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1024 19:21:37.563022  565581 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1024 19:21:37.563033  565581 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1024 19:21:37.563047  565581 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1024 19:21:37.563059  565581 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1024 19:21:37.563072  565581 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1024 19:21:37.659392  565581 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1024 19:21:37.659425  565581 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1024 19:21:37.688141  565581 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:21:37.688169  565581 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:21:37.688175  565581 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 19:21:37.770276  565581 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1024 19:21:39.786826  565581 command_runner.go:130] > This node has joined the cluster:
	I1024 19:21:39.786848  565581 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1024 19:21:39.786855  565581 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1024 19:21:39.786861  565581 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1024 19:21:39.789724  565581 command_runner.go:130] ! W1024 19:21:37.522743    1106 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1024 19:21:39.789752  565581 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-gcp\n", err: exit status 1
	I1024 19:21:39.789761  565581 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:21:39.789783  565581 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zb64nl.tintn3kq71xp2d3q --discovery-token-ca-cert-hash sha256:d853c742f30e3231fb4e75ce3290ca65b4dc42efdf1b2f51d52e58ff321fbee8 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-961484-m02": (2.309778644s)
	I1024 19:21:39.789811  565581 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1024 19:21:39.895562  565581 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1024 19:21:39.983266  565581 start.go:306] JoinCluster complete in 2.680687254s
	I1024 19:21:39.983306  565581 cni.go:84] Creating CNI manager for ""
	I1024 19:21:39.983315  565581 cni.go:136] 2 nodes found, recommending kindnet
	I1024 19:21:39.983508  565581 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:21:39.988881  565581 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 19:21:39.988915  565581 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I1024 19:21:39.988923  565581 command_runner.go:130] > Device: 37h/55d	Inode: 2849762     Links: 1
	I1024 19:21:39.988929  565581 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 19:21:39.988935  565581 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I1024 19:21:39.988941  565581 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I1024 19:21:39.988946  565581 command_runner.go:130] > Change: 2023-10-24 19:00:55.566952662 +0000
	I1024 19:21:39.988951  565581 command_runner.go:130] >  Birth: 2023-10-24 19:00:55.538949975 +0000
	I1024 19:21:39.989041  565581 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:21:39.989056  565581 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:21:40.010949  565581 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:21:40.325978  565581 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:21:40.326013  565581 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 19:21:40.326020  565581 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 19:21:40.326025  565581 command_runner.go:130] > daemonset.apps/kindnet configured
	I1024 19:21:40.326355  565581 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:40.326610  565581 kapi.go:59] client config for multinode-961484: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:21:40.326885  565581 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 19:21:40.326896  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:40.326903  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:40.326909  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:40.329144  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:40.329164  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:40.329171  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:40.329177  565581 round_trippers.go:580]     Content-Length: 291
	I1024 19:21:40.329182  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:40 GMT
	I1024 19:21:40.329187  565581 round_trippers.go:580]     Audit-Id: 1ab00aa2-4502-4f92-be1f-d440255677ce
	I1024 19:21:40.329192  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:40.329197  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:40.329202  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:40.329225  565581 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"72307e84-17f5-44e0-9f8d-7067b45ba693","resourceVersion":"410","creationTimestamp":"2023-10-24T19:20:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 19:21:40.329323  565581 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-961484" context rescaled to 1 replicas
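The GET above reads the coredns deployment's scale subresource; since spec.replicas is already 1, nothing needs to be written and kapi.go can report the rescale as complete. A sketch of the same read-then-update flow with client-go; scaleCoreDNS is a hypothetical helper, not minikube's code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// scaleCoreDNS reads the deployment's scale subresource (the endpoint the
	// GET above hits) and only writes back if the replica count differs.
	func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
		d := cs.AppsV1().Deployments("kube-system")
		s, err := d.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if s.Spec.Replicas == replicas {
			return nil // already at the desired count, as in the log above
		}
		s.Spec.Replicas = replicas
		_, err = d.UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(scaleCoreDNS(context.Background(), cs, 1))
	}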
	I1024 19:21:40.329355  565581 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 19:21:40.333379  565581 out.go:177] * Verifying Kubernetes components...
	I1024 19:21:40.335744  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:40.348524  565581 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:21:40.348737  565581 kapi.go:59] client config for multinode-961484: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/multinode-961484/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:21:40.349012  565581 node_ready.go:35] waiting up to 6m0s for node "multinode-961484-m02" to be "Ready" ...
	I1024 19:21:40.349076  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:40.349081  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:40.349088  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:40.349094  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:40.351554  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:40.351578  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:40.351588  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:40 GMT
	I1024 19:21:40.351594  565581 round_trippers.go:580]     Audit-Id: 15909f8e-747b-4f23-b178-0dc43ba22e0e
	I1024 19:21:40.351599  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:40.351606  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:40.351614  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:40.351622  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:40.351768  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"451","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1024 19:21:40.352104  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:40.352114  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:40.352121  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:40.352127  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:40.355134  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:40.355165  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:40.355173  565581 round_trippers.go:580]     Audit-Id: eb5e6af6-2f81-4eac-adb5-729a28c2e8b4
	I1024 19:21:40.355183  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:40.355189  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:40.355195  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:40.355200  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:40.355206  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:40 GMT
	I1024 19:21:40.355384  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"451","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1024 19:21:40.856965  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:40.857001  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:40.857011  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:40.857034  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:40.860459  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:40.860484  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:40.860492  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:40.860498  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:40 GMT
	I1024 19:21:40.860503  565581 round_trippers.go:580]     Audit-Id: 4f6e6a3b-2fdc-4518-9db2-d22c6a432074
	I1024 19:21:40.860508  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:40.860514  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:40.860519  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:40.860938  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"451","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1024 19:21:41.356741  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:41.356817  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.356831  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.356842  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.360383  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:41.360408  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.360416  565581 round_trippers.go:580]     Audit-Id: e5575e6c-ff02-4f36-941f-f8e18bb1ba44
	I1024 19:21:41.360422  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.360427  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.360432  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.360437  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.360442  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.360667  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"451","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I1024 19:21:41.856327  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:41.856364  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.856377  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.856385  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.859995  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:41.860159  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.860177  565581 round_trippers.go:580]     Audit-Id: a295ab2e-6dc6-4a81-9a87-803887e787aa
	I1024 19:21:41.860185  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.860192  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.860198  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.860208  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.860217  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.860462  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"468","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1024 19:21:41.861028  565581 node_ready.go:49] node "multinode-961484-m02" has status "Ready":"True"
	I1024 19:21:41.861059  565581 node_ready.go:38] duration metric: took 1.51203044s waiting for node "multinode-961484-m02" to be "Ready" ...
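The readiness wait above is a plain poll: re-GET the Node roughly every half second until its Ready condition reports True, within the 6m0s budget. A hedged client-go sketch of the same loop; waitNodeReady and the exact interval are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the node's Ready condition is True, mirroring
	// the repeated GETs in the log above.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			n, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitNodeReady(cs, "multinode-961484-m02", 6*time.Minute))
	}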
	I1024 19:21:41.861073  565581 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:21:41.861152  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:21:41.861157  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.861168  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.861174  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.866364  565581 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 19:21:41.866395  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.866403  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.866410  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.866416  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.866422  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.866435  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.866442  565581 round_trippers.go:580]     Audit-Id: 78cc95a1-8673-46d5-af1c-b896e531c0b8
	I1024 19:21:41.867046  565581 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"468"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"406","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1024 19:21:41.869200  565581 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.869305  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wgdhw
	I1024 19:21:41.869313  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.869322  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.869328  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.872439  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:41.872471  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.872492  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.872500  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.872507  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.872515  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.872531  565581 round_trippers.go:580]     Audit-Id: 60228f3d-34d5-4942-95a7-67cbb9060954
	I1024 19:21:41.872541  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.872698  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wgdhw","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7","resourceVersion":"406","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"adeec792-9c97-4826-a42b-d2029ced4461","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adeec792-9c97-4826-a42b-d2029ced4461\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1024 19:21:41.873298  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:41.873316  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.873324  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.873330  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.875989  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:41.876140  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.876156  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.876164  565581 round_trippers.go:580]     Audit-Id: 5183942d-8324-4006-8cba-a3a3a2f0c8d3
	I1024 19:21:41.876171  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.876179  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.876186  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.876194  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.876713  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:41.877213  565581 pod_ready.go:92] pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:41.877239  565581 pod_ready.go:81] duration metric: took 8.01296ms waiting for pod "coredns-5dd5756b68-wgdhw" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.877257  565581 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.877349  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-961484
	I1024 19:21:41.877361  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.877373  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.877382  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.880988  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:41.881086  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.881102  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.881108  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.881115  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.881120  565581 round_trippers.go:580]     Audit-Id: 0cc7e7aa-15e5-423f-afd7-d9a4c4fc2dd2
	I1024 19:21:41.881132  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.881144  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.881287  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-961484","namespace":"kube-system","uid":"40e3cd85-c990-47c3-9b4f-3357407912b3","resourceVersion":"293","creationTimestamp":"2023-10-24T19:20:55Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"a8dcbb037fe63d1a0a12d3fc24328a1e","kubernetes.io/config.mirror":"a8dcbb037fe63d1a0a12d3fc24328a1e","kubernetes.io/config.seen":"2023-10-24T19:20:50.668774383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1024 19:21:41.881787  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:41.881804  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.881815  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.881824  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.884692  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:41.884715  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.884732  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.884744  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.884752  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.884762  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.884796  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.884806  565581 round_trippers.go:580]     Audit-Id: 953a946c-8b32-4125-ad82-755ad4378a8d
	I1024 19:21:41.884947  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:41.885309  565581 pod_ready.go:92] pod "etcd-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:41.885328  565581 pod_ready.go:81] duration metric: took 8.062083ms waiting for pod "etcd-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.885350  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.885417  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-961484
	I1024 19:21:41.885427  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.885438  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.885447  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.887707  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:41.887724  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.887730  565581 round_trippers.go:580]     Audit-Id: ee87137f-1980-4fce-8919-6d5db2ce25b0
	I1024 19:21:41.887735  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.887740  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.887745  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.887751  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.887757  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.887982  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-961484","namespace":"kube-system","uid":"ddaee20f-e0d6-4c4d-9f9e-455ef68f3c19","resourceVersion":"287","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"4a9cd23fd8090ce7848f2d7b649f3664","kubernetes.io/config.mirror":"4a9cd23fd8090ce7848f2d7b649f3664","kubernetes.io/config.seen":"2023-10-24T19:20:57.153454574Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1024 19:21:41.888455  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:41.888469  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.888477  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.888483  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.890466  565581 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 19:21:41.890484  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.890491  565581 round_trippers.go:580]     Audit-Id: bece6f4e-ef8b-4ca6-a0aa-ba56b211d98d
	I1024 19:21:41.890496  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.890501  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.890506  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.890513  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.890518  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.890620  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:41.890968  565581 pod_ready.go:92] pod "kube-apiserver-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:41.890995  565581 pod_ready.go:81] duration metric: took 5.636908ms waiting for pod "kube-apiserver-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.891005  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.891066  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-961484
	I1024 19:21:41.891076  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.891082  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.891088  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.893191  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:41.893210  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.893219  565581 round_trippers.go:580]     Audit-Id: 565f47e8-1fde-453e-a46b-04bdfdcb33e8
	I1024 19:21:41.893225  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.893230  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.893236  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.893241  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.893246  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.893592  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-961484","namespace":"kube-system","uid":"6e58ec4f-71e0-4935-82f7-ea76ef7a7014","resourceVersion":"294","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"35e97d55ff71e17e9280e24931c7bc7f","kubernetes.io/config.mirror":"35e97d55ff71e17e9280e24931c7bc7f","kubernetes.io/config.seen":"2023-10-24T19:20:57.153464383Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1024 19:21:41.894080  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:41.894094  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:41.894101  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:41.894107  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:41.897222  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:41.897246  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:41.897256  565581 round_trippers.go:580]     Audit-Id: 2e9f6bc7-315f-4690-9067-18945dc4e935
	I1024 19:21:41.897262  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:41.897268  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:41.897273  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:41.897279  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:41.897284  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:41 GMT
	I1024 19:21:41.897427  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:41.897859  565581 pod_ready.go:92] pod "kube-controller-manager-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:41.897882  565581 pod_ready.go:81] duration metric: took 6.870444ms waiting for pod "kube-controller-manager-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:41.897896  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-87vtd" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:42.057401  565581 request.go:629] Waited for 159.415544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87vtd
	I1024 19:21:42.057502  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-87vtd
	I1024 19:21:42.057517  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:42.057525  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:42.057535  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:42.060559  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:42.060588  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:42.060596  565581 round_trippers.go:580]     Audit-Id: d8f61ff4-0a57-4436-bf18-6c92e4bee80a
	I1024 19:21:42.060604  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:42.060611  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:42.060620  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:42.060628  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:42.060635  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:42 GMT
	I1024 19:21:42.060804  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-87vtd","generateName":"kube-proxy-","namespace":"kube-system","uid":"dfc38cf1-7c84-476c-a1c6-dd1c81356cdb","resourceVersion":"376","creationTimestamp":"2023-10-24T19:21:10Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"60ac3a5f-4331-4153-af10-f224daecff07","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60ac3a5f-4331-4153-af10-f224daecff07\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1024 19:21:42.256856  565581 request.go:629] Waited for 195.417793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:42.256939  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:42.256945  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:42.256956  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:42.256965  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:42.260021  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:42.260050  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:42.260060  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:42 GMT
	I1024 19:21:42.260069  565581 round_trippers.go:580]     Audit-Id: 2a599692-fb10-4a7f-9184-c982771060e7
	I1024 19:21:42.260076  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:42.260084  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:42.260091  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:42.260101  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:42.260326  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:42.260821  565581 pod_ready.go:92] pod "kube-proxy-87vtd" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:42.260847  565581 pod_ready.go:81] duration metric: took 362.94184ms waiting for pod "kube-proxy-87vtd" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:42.260865  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dp8l" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:42.457374  565581 request.go:629] Waited for 196.423887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dp8l
	I1024 19:21:42.457457  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dp8l
	I1024 19:21:42.457465  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:42.457476  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:42.457487  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:42.460100  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:42.460120  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:42.460126  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:42.460132  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:42 GMT
	I1024 19:21:42.460137  565581 round_trippers.go:580]     Audit-Id: 1b2bd029-9d09-4aec-a5d2-c0491b19789c
	I1024 19:21:42.460142  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:42.460147  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:42.460152  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:42.460552  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8dp8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"73e4001e-3f31-4e76-aacd-ceb704fd653a","resourceVersion":"464","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"60ac3a5f-4331-4153-af10-f224daecff07","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"60ac3a5f-4331-4153-af10-f224daecff07\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1024 19:21:42.656895  565581 request.go:629] Waited for 195.456554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:42.656989  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484-m02
	I1024 19:21:42.656996  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:42.657082  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:42.657091  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:42.661258  565581 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1024 19:21:42.661297  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:42.661309  565581 round_trippers.go:580]     Audit-Id: 6e0fa75b-c119-4484-87d0-3759aa92c98a
	I1024 19:21:42.661317  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:42.661323  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:42.661328  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:42.661333  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:42.661338  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:42 GMT
	I1024 19:21:42.661509  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484-m02","uid":"1669495e-02f1-49d1-9241-29f3f1e840cb","resourceVersion":"468","creationTimestamp":"2023-10-24T19:21:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:21:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I1024 19:21:42.662024  565581 pod_ready.go:92] pod "kube-proxy-8dp8l" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:42.662047  565581 pod_ready.go:81] duration metric: took 401.171613ms waiting for pod "kube-proxy-8dp8l" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:42.662064  565581 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:42.856460  565581 request.go:629] Waited for 194.319178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-961484
	I1024 19:21:42.856546  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-961484
	I1024 19:21:42.856551  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:42.856559  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:42.856567  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:42.858999  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:42.859029  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:42.859041  565581 round_trippers.go:580]     Audit-Id: 0097301d-b6f3-45ea-af6a-60aa43cbef99
	I1024 19:21:42.859050  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:42.859059  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:42.859067  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:42.859079  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:42.859087  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:42 GMT
	I1024 19:21:42.859221  565581 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-961484","namespace":"kube-system","uid":"2304ca9c-4994-4c85-8790-3e9e112351fd","resourceVersion":"284","creationTimestamp":"2023-10-24T19:20:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f8529e664309cdaf0d05b1249def38ec","kubernetes.io/config.mirror":"f8529e664309cdaf0d05b1249def38ec","kubernetes.io/config.seen":"2023-10-24T19:20:57.153466244Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T19:20:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1024 19:21:43.056977  565581 request.go:629] Waited for 197.36559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:43.057148  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-961484
	I1024 19:21:43.057171  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:43.057180  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:43.057186  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:43.060049  565581 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 19:21:43.060187  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:43.060206  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:43.060217  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:43.060225  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:43.060235  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:43.060245  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:43 GMT
	I1024 19:21:43.060253  565581 round_trippers.go:580]     Audit-Id: 6fabfdac-0ecb-4cae-b0b6-296e0b2d3a5d
	I1024 19:21:43.060509  565581 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T19:20:53Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I1024 19:21:43.060884  565581 pod_ready.go:92] pod "kube-scheduler-multinode-961484" in "kube-system" namespace has status "Ready":"True"
	I1024 19:21:43.060908  565581 pod_ready.go:81] duration metric: took 398.835822ms waiting for pod "kube-scheduler-multinode-961484" in "kube-system" namespace to be "Ready" ...
	I1024 19:21:43.060920  565581 pod_ready.go:38] duration metric: took 1.199834962s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:21:43.060940  565581 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:21:43.060994  565581 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:43.073714  565581 system_svc.go:56] duration metric: took 12.759427ms WaitForService to wait for kubelet.
	I1024 19:21:43.073757  565581 kubeadm.go:581] duration metric: took 2.744378052s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:21:43.073785  565581 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:21:43.257385  565581 request.go:629] Waited for 183.47805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1024 19:21:43.257602  565581 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1024 19:21:43.257631  565581 round_trippers.go:469] Request Headers:
	I1024 19:21:43.257651  565581 round_trippers.go:473]     Accept: application/json, */*
	I1024 19:21:43.257666  565581 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1024 19:21:43.261361  565581 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 19:21:43.261395  565581 round_trippers.go:577] Response Headers:
	I1024 19:21:43.261406  565581 round_trippers.go:580]     Content-Type: application/json
	I1024 19:21:43.261421  565581 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 14a1930d-7232-449a-8e2d-25b4a2f575eb
	I1024 19:21:43.261430  565581 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: cff629ed-a9c0-49eb-a2d7-44049180195a
	I1024 19:21:43.261440  565581 round_trippers.go:580]     Date: Tue, 24 Oct 2023 19:21:43 GMT
	I1024 19:21:43.261447  565581 round_trippers.go:580]     Audit-Id: 03d7f0e9-f802-4ecd-baba-2cbd0f996c38
	I1024 19:21:43.261453  565581 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 19:21:43.261641  565581 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"469"},"items":[{"metadata":{"name":"multinode-961484","uid":"5a17268c-274a-4846-954a-9a2654047308","resourceVersion":"380","creationTimestamp":"2023-10-24T19:20:53Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-961484","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-961484","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T19_20_58_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I1024 19:21:43.262134  565581 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:21:43.262151  565581 node_conditions.go:123] node cpu capacity is 8
	I1024 19:21:43.262160  565581 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:21:43.262164  565581 node_conditions.go:123] node cpu capacity is 8
	I1024 19:21:43.262168  565581 node_conditions.go:105] duration metric: took 188.378167ms to run NodePressure ...
	I1024 19:21:43.262180  565581 start.go:228] waiting for startup goroutines ...
	I1024 19:21:43.262204  565581 start.go:242] writing updated cluster config ...
	I1024 19:21:43.262494  565581 ssh_runner.go:195] Run: rm -f paused
	I1024 19:21:43.323408  565581 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:21:43.328056  565581 out.go:177] * Done! kubectl is now configured to use "multinode-961484" cluster and "default" namespace by default
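Editor's note: the readiness checks in the log above (node_ready.go / pod_ready.go) are plain client-go polling: GET the object, inspect its Ready condition, and retry until it is True or the deadline expires. The "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter (rest.Config QPS/Burst), not from API Priority and Fairness, as the message itself notes. Below is a minimal sketch of the same pattern, assuming a kubeconfig at the default path; this is illustrative Go, not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube wrote (the default ~/.kube/config path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst is what avoids the client-side throttling waits logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m0s budget the log uses for system-critical pods.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-wgdhw", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to be Ready")
		case <-time.After(2 * time.Second):
		}
	}
}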
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:21:13 multinode-961484 crio[953]: time="2023-10-24 19:21:13.821843264Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6da828590077aedeffdb16071defb7db96b07c57c6e68cb8288229abac3f4979/merged/etc/passwd: no such file or directory"
	Oct 24 19:21:13 multinode-961484 crio[953]: time="2023-10-24 19:21:13.821888629Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6da828590077aedeffdb16071defb7db96b07c57c6e68cb8288229abac3f4979/merged/etc/group: no such file or directory"
	Oct 24 19:21:13 multinode-961484 crio[953]: time="2023-10-24 19:21:13.869686007Z" level=info msg="Created container d9f656b51d30e434116aa8f3db9e00a10ce3559b2d7646cdece170503429c48d: kube-system/storage-provisioner/storage-provisioner" id=52bc5cf7-7120-44c2-9e4b-081c3fc47733 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:21:13 multinode-961484 crio[953]: time="2023-10-24 19:21:13.870531287Z" level=info msg="Starting container: d9f656b51d30e434116aa8f3db9e00a10ce3559b2d7646cdece170503429c48d" id=3381821a-f334-4906-8719-e55c030eb880 name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:21:13 multinode-961484 crio[953]: time="2023-10-24 19:21:13.882250651Z" level=info msg="Started container" PID=2378 containerID=d9f656b51d30e434116aa8f3db9e00a10ce3559b2d7646cdece170503429c48d description=kube-system/storage-provisioner/storage-provisioner id=3381821a-f334-4906-8719-e55c030eb880 name=/runtime.v1.RuntimeService/StartContainer sandboxID=64c429e2c9174f7c43161364b7afedd9bb5047a99d04fa4bd060fd4f4dabc614
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.772530997Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-px9mp/POD" id=fde4d7cc-e2b9-43b7-baab-aa60bc10afbf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.772632175Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.789916141Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-px9mp Namespace:default ID:9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8 UID:7ba48d8c-b0b5-4be7-a75d-c4425324fa52 NetNS:/var/run/netns/48c0b307-3eca-4bff-aa32-b0fde95571b3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.789964013Z" level=info msg="Adding pod default_busybox-5bc68d56bd-px9mp to CNI network \"kindnet\" (type=ptp)"
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.800396809Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-px9mp Namespace:default ID:9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8 UID:7ba48d8c-b0b5-4be7-a75d-c4425324fa52 NetNS:/var/run/netns/48c0b307-3eca-4bff-aa32-b0fde95571b3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.800555911Z" level=info msg="Checking pod default_busybox-5bc68d56bd-px9mp for CNI network kindnet (type=ptp)"
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.831228314Z" level=info msg="Ran pod sandbox 9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8 with infra container: default/busybox-5bc68d56bd-px9mp/POD" id=fde4d7cc-e2b9-43b7-baab-aa60bc10afbf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.832356830Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=e1c4ad85-7ed5-4a8d-ac44-a4aa49d2bb89 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.832632744Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=e1c4ad85-7ed5-4a8d-ac44-a4aa49d2bb89 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.833441943Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=08ded2c0-513b-4988-88fa-1ea653917a08 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:21:44 multinode-961484 crio[953]: time="2023-10-24 19:21:44.834595066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.011524321Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.564680340Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=08ded2c0-513b-4988-88fa-1ea653917a08 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.566214032Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=5a60f0d0-5e82-4bbf-9e50-daca15a448b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.566988181Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5a60f0d0-5e82-4bbf-9e50-daca15a448b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.568268849Z" level=info msg="Creating container: default/busybox-5bc68d56bd-px9mp/busybox" id=a9e1612d-b60a-4760-954e-e21ec89e5cea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.568417461Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.660154916Z" level=info msg="Created container 03fb7a665cd7fa21438fbb5c91791749e56bf98187d8f29a3ad91e9352328f7a: default/busybox-5bc68d56bd-px9mp/busybox" id=a9e1612d-b60a-4760-954e-e21ec89e5cea name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.660970111Z" level=info msg="Starting container: 03fb7a665cd7fa21438fbb5c91791749e56bf98187d8f29a3ad91e9352328f7a" id=73eba0c0-75fd-4ea2-9cf5-57075496aacb name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:21:45 multinode-961484 crio[953]: time="2023-10-24 19:21:45.670556860Z" level=info msg="Started container" PID=2512 containerID=03fb7a665cd7fa21438fbb5c91791749e56bf98187d8f29a3ad91e9352328f7a description=default/busybox-5bc68d56bd-px9mp/busybox id=73eba0c0-75fd-4ea2-9cf5-57075496aacb name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8
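Editor's note: the CRI-O lines above are the server side of CRI gRPC calls (/runtime.v1.RuntimeService/RunPodSandbox, CreateContainer, StartContainer) arriving over the unix socket named in the node's cri-socket annotation. A minimal sketch of querying the same endpoint from Go with k8s.io/cri-api, roughly equivalent to running sudo crictl ps; it assumes root access to the socket and is illustrative, not minikube's tooling.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path taken from the log's cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC family the CRI-O log lines name (/runtime.v1.RuntimeService/...).
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}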
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03fb7a665cd7f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 seconds ago       Running             busybox                   0                   9d013b4432043       busybox-5bc68d56bd-px9mp
	d9f656b51d30e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      36 seconds ago      Running             storage-provisioner       0                   64c429e2c9174       storage-provisioner
	6ac8850c30dfd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      37 seconds ago      Running             coredns                   0                   2627a18106fdc       coredns-5dd5756b68-wgdhw
	e23c8cd7cf14a       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      39 seconds ago      Running             kindnet-cni               0                   b7a08492732f7       kindnet-zgn88
	ba82c38bc2529       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf                                      39 seconds ago      Running             kube-proxy                0                   4f43a3018d845       kube-proxy-87vtd
	f47a657394e3b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      59 seconds ago      Running             etcd                      0                   eafbd327d2e53       etcd-multinode-961484
	11e909e2ea03c       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4                                      59 seconds ago      Running             kube-scheduler            0                   92206620f5ff3       kube-scheduler-multinode-961484
	a559128b74326       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3                                      59 seconds ago      Running             kube-controller-manager   0                   9a428061df67c       kube-controller-manager-multinode-961484
	978b5b41effe9       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076                                      59 seconds ago      Running             kube-apiserver            0                   4371c4e4b23c9       kube-apiserver-multinode-961484
	
	* 
	* ==> coredns [6ac8850c30dfd7c48f2a02063f3d7e56238b8934ad95ab7657bdb9b1bdb7c0c3] <==
	* [INFO] 10.244.1.2:53916 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000231787s
	[INFO] 10.244.0.3:57125 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119621s
	[INFO] 10.244.0.3:46577 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002247923s
	[INFO] 10.244.0.3:59112 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086335s
	[INFO] 10.244.0.3:55109 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069195s
	[INFO] 10.244.0.3:48578 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002192254s
	[INFO] 10.244.0.3:46171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000084931s
	[INFO] 10.244.0.3:57239 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067198s
	[INFO] 10.244.0.3:58007 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072185s
	[INFO] 10.244.1.2:55150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144403s
	[INFO] 10.244.1.2:47783 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091958s
	[INFO] 10.244.1.2:39636 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090525s
	[INFO] 10.244.1.2:55756 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103629s
	[INFO] 10.244.0.3:53157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137242s
	[INFO] 10.244.0.3:54931 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00006806s
	[INFO] 10.244.0.3:42185 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088432s
	[INFO] 10.244.0.3:57439 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088854s
	[INFO] 10.244.1.2:49583 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142961s
	[INFO] 10.244.1.2:55586 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000188808s
	[INFO] 10.244.1.2:60954 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124796s
	[INFO] 10.244.1.2:39749 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099129s
	[INFO] 10.244.0.3:55289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137107s
	[INFO] 10.244.0.3:56111 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000067346s
	[INFO] 10.244.0.3:49611 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074137s
	[INFO] 10.244.0.3:39393 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142738s
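Editor's note: the NXDOMAIN/NOERROR pairs above are the pod resolver walking its search list (ndots:5): kubernetes.default.default.svc.cluster.local fails, then kubernetes.default.svc.cluster.local succeeds. A minimal in-pod check, assuming it runs inside a cluster pod whose /etc/resolv.conf points at this CoreDNS (illustrative only):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Fully qualified service name: resolves directly, no search-list expansion.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // typically the kubernetes Service ClusterIP, e.g. 10.96.0.1
}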
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-961484
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-961484
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=multinode-961484
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_20_58_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:20:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-961484
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:21:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:21:12 +0000   Tue, 24 Oct 2023 19:20:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:21:12 +0000   Tue, 24 Oct 2023 19:20:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:21:12 +0000   Tue, 24 Oct 2023 19:20:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:21:12 +0000   Tue, 24 Oct 2023 19:21:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-961484
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ca968f7227e42b9b2f148a246d11bdb
	  System UUID:                65de404b-98cf-4d7f-9ac9-eed5ae4bc423
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-px9mp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-wgdhw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     40s
	  kube-system                 etcd-multinode-961484                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         55s
	  kube-system                 kindnet-zgn88                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      40s
	  kube-system                 kube-apiserver-multinode-961484             250m (3%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-multinode-961484    200m (2%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-87vtd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-multinode-961484             100m (1%)     0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 39s   kube-proxy       
	  Normal  Starting                 60s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s   kubelet          Node multinode-961484 status is now: NodeHasSufficientMemory
	  Normal  Starting                 53s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s   kubelet          Node multinode-961484 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s   kubelet          Node multinode-961484 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s   kubelet          Node multinode-961484 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s   node-controller  Node multinode-961484 event: Registered Node multinode-961484 in Controller
	  Normal  NodeReady                38s   kubelet          Node multinode-961484 status is now: NodeReady
	
	
	Name:               multinode-961484-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-961484-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-961484-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:21:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:21:41 +0000   Tue, 24 Oct 2023 19:21:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:21:41 +0000   Tue, 24 Oct 2023 19:21:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:21:41 +0000   Tue, 24 Oct 2023 19:21:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:21:41 +0000   Tue, 24 Oct 2023 19:21:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-961484-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 f16d8b84996f48e9b9b6cb12b6a7fed9
	  System UUID:                2d212125-7cdf-4c2f-bc4a-95ee70a7b907
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-j2cch    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-qs5hv               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-proxy-8dp8l            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  RegisteredNode           11s                node-controller  Node multinode-961484-m02 event: Registered Node multinode-961484-m02 in Controller
	  Normal  NodeHasSufficientMemory  11s (x5 over 12s)  kubelet          Node multinode-961484-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x5 over 12s)  kubelet          Node multinode-961484-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x5 over 12s)  kubelet          Node multinode-961484-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9s                 kubelet          Node multinode-961484-m02 status is now: NodeReady
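
Note that the PodCIDRs above (10.244.0.0/24 on multinode-961484, 10.244.1.0/24 on multinode-961484-m02) line up with the client IPs in the CoreDNS log: 10.244.0.3 is a pod on the control plane, 10.244.1.2 a pod on the second node. A quick sketch to print that node-to-CIDR mapping:

	kubectl --context multinode-961484 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'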
	
	* 
	* ==> dmesg <==
	* [  +0.008410] FS-Cache: O-key=[8] 'dba20f0200000000'
	[  +0.004967] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006633] FS-Cache: N-cookie d=00000000758e7ab6{9p.inode} n=000000005cf6e31b
	[  +0.008764] FS-Cache: N-key=[8] 'dba20f0200000000'
	[  +0.357518] FS-Cache: Duplicate cookie detected
	[  +0.004696] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.006742] FS-Cache: O-cookie d=00000000758e7ab6{9p.inode} n=00000000d264f8e9
	[  +0.007356] FS-Cache: O-key=[8] 'e2a20f0200000000'
	[  +0.004932] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.006567] FS-Cache: N-cookie d=00000000758e7ab6{9p.inode} n=000000001cfa9689
	[  +0.007381] FS-Cache: N-key=[8] 'e2a20f0200000000'
	[Oct24 19:12] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +1.019070] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +2.015758] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +4.255535] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000029] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[  +8.195184] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[Oct24 19:13] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
	[ +32.764787] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 06 1f de 35 9e 73 56 90 d5 66 8e fc 08 00
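
The repeated "martian source ... from 127.0.0.1, on dev eth0" entries mean the kernel saw packets arrive on eth0 with the loopback address as their source; such packets are logged and normally dropped. Given the 19:12-19:13 timestamps, these likely stem from an earlier run on this host where traffic to 127.0.0.1 was DNATed to a pod. kube-proxy works around the drop by enabling route_localnet (see its log below); as a sketch, the setting can be checked inside the node with:

	out/minikube-linux-amd64 -p multinode-961484 ssh "sysctl net.ipv4.conf.all.route_localnet"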
	
	* 
	* ==> etcd [f47a657394e3baf6de448af91749693de8f5058be52d3ddf046f863e579ee299] <==
	* {"level":"info","ts":"2023-10-24T19:20:51.573946Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-24T19:20:51.577057Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T19:20:51.577318Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-24T19:20:51.577419Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-24T19:20:51.577583Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:20:51.57768Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:20:51.760456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-24T19:20:51.760509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-24T19:20:51.760526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-24T19:20:51.760554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:20:51.76056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-24T19:20:51.760569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-24T19:20:51.760577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-24T19:20:51.761975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:20:51.761972Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-961484 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:20:51.761995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:20:51.762362Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:20:51.762426Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:20:51.762018Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:20:51.763533Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:20:51.763548Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-24T19:20:51.763672Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:20:51.763769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:20:51.763807Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:21:30.146528Z","caller":"traceutil/trace.go:171","msg":"trace[325493382] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"142.084429ms","start":"2023-10-24T19:21:30.004429Z","end":"2023-10-24T19:21:30.146514Z","steps":["trace[325493382] 'process raft request'  (duration: 141.96028ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:21:50 up  3:04,  0 users,  load average: 1.43, 1.50, 1.23
	Linux multinode-961484 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [e23c8cd7cf14a93787855d981f31c9daec50a0b4dac9463555111ffed97a309e] <==
	* I1024 19:21:11.543115       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:21:11.543217       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1024 19:21:11.543412       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:21:11.543423       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:21:11.543443       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:21:11.942397       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 19:21:11.942428       1 main.go:227] handling current node
	I1024 19:21:21.956552       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 19:21:21.956584       1 main.go:227] handling current node
	I1024 19:21:31.969703       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 19:21:31.969739       1 main.go:227] handling current node
	I1024 19:21:41.982590       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 19:21:41.982617       1 main.go:227] handling current node
	I1024 19:21:41.982630       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 19:21:41.982637       1 main.go:250] Node multinode-961484-m02 has CIDR [10.244.1.0/24] 
	I1024 19:21:41.982817       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
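
When the second node registers, kindnet on the first node installs a route so pod traffic for 10.244.1.0/24 is sent via 192.168.58.3. The "Adding route" entry above corresponds roughly to the following command on the node, shown here only as an illustrative sketch, and the result can be verified over SSH:

	ip route replace 10.244.1.0/24 via 192.168.58.3
	out/minikube-linux-amd64 -p multinode-961484 ssh "ip route show 10.244.1.0/24"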
	
	* 
	* ==> kube-apiserver [978b5b41effe96056c2d4b38df3bda868b88f2456201037bc80615dd06214def] <==
	* I1024 19:20:53.862576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:20:53.863146       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:20:53.863185       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:20:53.863194       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:20:53.863202       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:20:53.863223       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:20:53.864192       1 controller.go:624] quota admission added evaluator for: namespaces
	I1024 19:20:53.941187       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:20:53.952076       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 19:20:54.041373       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:20:54.766402       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1024 19:20:54.770634       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:20:54.770659       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:20:55.368181       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:20:55.415612       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 19:20:55.554773       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1024 19:20:55.561018       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1024 19:20:55.562138       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 19:20:55.566476       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:20:55.793158       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:20:57.069765       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:20:57.082206       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1024 19:20:57.091238       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:21:10.060267       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1024 19:21:10.063101       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [a559128b7432614159554e75976a7fb958d7390425d3400d45cc5c179499b1fc] <==
	* I1024 19:21:12.429814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.745µs"
	I1024 19:21:12.462024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="115.264µs"
	I1024 19:21:13.368570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.122µs"
	I1024 19:21:14.975907       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1024 19:21:22.757887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.572161ms"
	I1024 19:21:22.758167       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="208.493µs"
	I1024 19:21:39.709343       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-961484-m02\" does not exist"
	I1024 19:21:39.718228       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-961484-m02" podCIDRs=["10.244.1.0/24"]
	I1024 19:21:39.721021       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qs5hv"
	I1024 19:21:39.721393       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8dp8l"
	I1024 19:21:39.978778       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-961484-m02"
	I1024 19:21:39.978803       1 event.go:307] "Event occurred" object="multinode-961484-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-961484-m02 event: Registered Node multinode-961484-m02 in Controller"
	I1024 19:21:41.788718       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-961484-m02"
	I1024 19:21:44.139208       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1024 19:21:44.151129       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-j2cch"
	I1024 19:21:44.162165       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-px9mp"
	I1024 19:21:44.170048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="30.996288ms"
	I1024 19:21:44.181093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.969874ms"
	I1024 19:21:44.181444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="95.365µs"
	I1024 19:21:44.185077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="112.762µs"
	I1024 19:21:44.989180       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-j2cch" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-j2cch"
	I1024 19:21:46.295119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.024194ms"
	I1024 19:21:46.295361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="151.315µs"
	I1024 19:21:46.452934       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.228141ms"
	I1024 19:21:46.453025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.135µs"
	
	* 
	* ==> kube-proxy [ba82c38bc252967ae416319666b6f24ab7ddc3a3ea695ca8a28c8b38f2f496b4] <==
	* I1024 19:21:11.473034       1 server_others.go:69] "Using iptables proxy"
	I1024 19:21:11.483362       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1024 19:21:11.550586       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:21:11.553138       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:21:11.553179       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:21:11.553186       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:21:11.553223       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:21:11.553483       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:21:11.553512       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:21:11.554574       1 config.go:188] "Starting service config controller"
	I1024 19:21:11.554624       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:21:11.554574       1 config.go:315] "Starting node config controller"
	I1024 19:21:11.554670       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:21:11.554553       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:21:11.554742       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:21:11.655685       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:21:11.655704       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:21:11.655671       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [11e909e2ea03cf9016d6a0f334a83c22db39b4b9cbcd71666abeae331fc700c8] <==
	* W1024 19:20:53.959253       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:20:53.959268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:20:53.959692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:20:53.959728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1024 19:20:53.973022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:20:53.973081       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:20:54.770978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:20:54.771027       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1024 19:20:54.840705       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:20:54.840737       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:20:54.869577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:20:54.869616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 19:20:54.907235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:20:54.907276       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1024 19:20:54.930830       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:20:54.930862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 19:20:54.938482       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 19:20:54.938522       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1024 19:20:54.987945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:20:54.987998       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 19:20:54.999331       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:20:54.999370       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1024 19:20:55.121213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:20:55.121253       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1024 19:20:58.048074       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
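
The burst of "forbidden" reflector warnings is expected: the scheduler starts before kubeadm finishes installing its RBAC bindings, and the final "Caches are synced" line shows it recovered once authorization caught up. As a sketch, the binding that grants these permissions can be inspected with:

	kubectl --context multinode-961484 get clusterrolebinding system:kube-scheduler -o wide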
	
	* 
	* ==> kubelet <==
	* Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.444883    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmrx\" (UniqueName: \"kubernetes.io/projected/dfc38cf1-7c84-476c-a1c6-dd1c81356cdb-kube-api-access-qlmrx\") pod \"kube-proxy-87vtd\" (UID: \"dfc38cf1-7c84-476c-a1c6-dd1c81356cdb\") " pod="kube-system/kube-proxy-87vtd"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.444911    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a26cc577-13fe-45ab-9899-365498d67e7e-xtables-lock\") pod \"kindnet-zgn88\" (UID: \"a26cc577-13fe-45ab-9899-365498d67e7e\") " pod="kube-system/kindnet-zgn88"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.444947    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a26cc577-13fe-45ab-9899-365498d67e7e-cni-cfg\") pod \"kindnet-zgn88\" (UID: \"a26cc577-13fe-45ab-9899-365498d67e7e\") " pod="kube-system/kindnet-zgn88"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.444974    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksw9\" (UniqueName: \"kubernetes.io/projected/a26cc577-13fe-45ab-9899-365498d67e7e-kube-api-access-wksw9\") pod \"kindnet-zgn88\" (UID: \"a26cc577-13fe-45ab-9899-365498d67e7e\") " pod="kube-system/kindnet-zgn88"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.445003    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dfc38cf1-7c84-476c-a1c6-dd1c81356cdb-xtables-lock\") pod \"kube-proxy-87vtd\" (UID: \"dfc38cf1-7c84-476c-a1c6-dd1c81356cdb\") " pod="kube-system/kube-proxy-87vtd"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: I1024 19:21:10.445036    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dfc38cf1-7c84-476c-a1c6-dd1c81356cdb-lib-modules\") pod \"kube-proxy-87vtd\" (UID: \"dfc38cf1-7c84-476c-a1c6-dd1c81356cdb\") " pod="kube-system/kube-proxy-87vtd"
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: W1024 19:21:10.941191    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio-4f43a3018d8459d51533ecc5e22d21b078c5114f17998d33585fb7cd64d72c59 WatchSource:0}: Error finding container 4f43a3018d8459d51533ecc5e22d21b078c5114f17998d33585fb7cd64d72c59: Status 404 returned error can't find the container with id 4f43a3018d8459d51533ecc5e22d21b078c5114f17998d33585fb7cd64d72c59
	Oct 24 19:21:10 multinode-961484 kubelet[1589]: W1024 19:21:10.965776    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio-b7a08492732f78a835461aec3144d24f455e8ca9cce34ea2ec06148f84f070db WatchSource:0}: Error finding container b7a08492732f78a835461aec3144d24f455e8ca9cce34ea2ec06148f84f070db: Status 404 returned error can't find the container with id b7a08492732f78a835461aec3144d24f455e8ca9cce34ea2ec06148f84f070db
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.364822    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zgn88" podStartSLOduration=2.364743919 podCreationTimestamp="2023-10-24 19:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:21:12.364544335 +0000 UTC m=+15.322458523" watchObservedRunningTime="2023-10-24 19:21:12.364743919 +0000 UTC m=+15.322658111"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.379029    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-87vtd" podStartSLOduration=2.378974875 podCreationTimestamp="2023-10-24 19:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:21:12.378766982 +0000 UTC m=+15.336681193" watchObservedRunningTime="2023-10-24 19:21:12.378974875 +0000 UTC m=+15.336889063"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.397234    1589 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.429884    1589 topology_manager.go:215] "Topology Admit Handler" podUID="fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7" podNamespace="kube-system" podName="coredns-5dd5756b68-wgdhw"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.559346    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7-config-volume\") pod \"coredns-5dd5756b68-wgdhw\" (UID: \"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7\") " pod="kube-system/coredns-5dd5756b68-wgdhw"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: I1024 19:21:12.559526    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q776\" (UniqueName: \"kubernetes.io/projected/fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7-kube-api-access-6q776\") pod \"coredns-5dd5756b68-wgdhw\" (UID: \"fb1ef906-d2ec-40d7-8dd7-56ca7667d0d7\") " pod="kube-system/coredns-5dd5756b68-wgdhw"
	Oct 24 19:21:12 multinode-961484 kubelet[1589]: W1024 19:21:12.785575    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio-2627a18106fdc1fa54eb6f264de789c2c70647a33cb4b0efe2da6c19178d2da7 WatchSource:0}: Error finding container 2627a18106fdc1fa54eb6f264de789c2c70647a33cb4b0efe2da6c19178d2da7: Status 404 returned error can't find the container with id 2627a18106fdc1fa54eb6f264de789c2c70647a33cb4b0efe2da6c19178d2da7
	Oct 24 19:21:13 multinode-961484 kubelet[1589]: I1024 19:21:13.164025    1589 topology_manager.go:215] "Topology Admit Handler" podUID="6ae1e99e-0a67-49f4-b89b-b708d36767cb" podNamespace="kube-system" podName="storage-provisioner"
	Oct 24 19:21:13 multinode-961484 kubelet[1589]: I1024 19:21:13.364196    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2stj\" (UniqueName: \"kubernetes.io/projected/6ae1e99e-0a67-49f4-b89b-b708d36767cb-kube-api-access-q2stj\") pod \"storage-provisioner\" (UID: \"6ae1e99e-0a67-49f4-b89b-b708d36767cb\") " pod="kube-system/storage-provisioner"
	Oct 24 19:21:13 multinode-961484 kubelet[1589]: I1024 19:21:13.364304    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6ae1e99e-0a67-49f4-b89b-b708d36767cb-tmp\") pod \"storage-provisioner\" (UID: \"6ae1e99e-0a67-49f4-b89b-b708d36767cb\") " pod="kube-system/storage-provisioner"
	Oct 24 19:21:13 multinode-961484 kubelet[1589]: I1024 19:21:13.368524    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wgdhw" podStartSLOduration=3.368476508 podCreationTimestamp="2023-10-24 19:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:21:13.368315546 +0000 UTC m=+16.326229750" watchObservedRunningTime="2023-10-24 19:21:13.368476508 +0000 UTC m=+16.326390696"
	Oct 24 19:21:13 multinode-961484 kubelet[1589]: W1024 19:21:13.805811    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio-64c429e2c9174f7c43161364b7afedd9bb5047a99d04fa4bd060fd4f4dabc614 WatchSource:0}: Error finding container 64c429e2c9174f7c43161364b7afedd9bb5047a99d04fa4bd060fd4f4dabc614: Status 404 returned error can't find the container with id 64c429e2c9174f7c43161364b7afedd9bb5047a99d04fa4bd060fd4f4dabc614
	Oct 24 19:21:14 multinode-961484 kubelet[1589]: I1024 19:21:14.370808    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.370754128 podCreationTimestamp="2023-10-24 19:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 19:21:14.370694551 +0000 UTC m=+17.328608739" watchObservedRunningTime="2023-10-24 19:21:14.370754128 +0000 UTC m=+17.328668315"
	Oct 24 19:21:44 multinode-961484 kubelet[1589]: I1024 19:21:44.170663    1589 topology_manager.go:215] "Topology Admit Handler" podUID="7ba48d8c-b0b5-4be7-a75d-c4425324fa52" podNamespace="default" podName="busybox-5bc68d56bd-px9mp"
	Oct 24 19:21:44 multinode-961484 kubelet[1589]: I1024 19:21:44.368425    1589 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr28b\" (UniqueName: \"kubernetes.io/projected/7ba48d8c-b0b5-4be7-a75d-c4425324fa52-kube-api-access-nr28b\") pod \"busybox-5bc68d56bd-px9mp\" (UID: \"7ba48d8c-b0b5-4be7-a75d-c4425324fa52\") " pod="default/busybox-5bc68d56bd-px9mp"
	Oct 24 19:21:44 multinode-961484 kubelet[1589]: W1024 19:21:44.829882    1589 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio-9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8 WatchSource:0}: Error finding container 9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8: Status 404 returned error can't find the container with id 9d013b44320431e077a4277c194ca20ce415682ced32defcade9977600e5d2a8
	Oct 24 19:21:46 multinode-961484 kubelet[1589]: I1024 19:21:46.445163    1589 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-px9mp" podStartSLOduration=1.7124605800000001 podCreationTimestamp="2023-10-24 19:21:44 +0000 UTC" firstStartedPulling="2023-10-24 19:21:44.83285431 +0000 UTC m=+47.790768481" lastFinishedPulling="2023-10-24 19:21:45.565467286 +0000 UTC m=+48.523381466" observedRunningTime="2023-10-24 19:21:46.444886353 +0000 UTC m=+49.402800559" watchObservedRunningTime="2023-10-24 19:21:46.445073565 +0000 UTC m=+49.402987753"
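
The "Failed to process watch event ... 404" warnings come from cadvisor racing short-lived cri-o containers and are usually benign; the surrounding pod_startup_latency_tracker lines show the pods did start within seconds. As a sketch, their volume can be gauged on the node with:

	out/minikube-linux-amd64 -p multinode-961484 ssh "sudo journalctl -u kubelet | grep -c 'Failed to process watch event'"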
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-961484 -n multinode-961484
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-961484 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.65s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (69.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.534540438.exe start -p running-upgrade-597198 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.534540438.exe start -p running-upgrade-597198 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m4.324613112s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-597198 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-597198 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.602481958s)

                                                
                                                
-- stdout --
	* [running-upgrade-597198] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-597198 in cluster running-upgrade-597198
	* Pulling base image ...
	* Updating the running docker "running-upgrade-597198" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:35:19.827397  658927 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:35:19.827561  658927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:19.827571  658927 out.go:309] Setting ErrFile to fd 2...
	I1024 19:35:19.827576  658927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:19.827778  658927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:35:19.828414  658927 out.go:303] Setting JSON to false
	I1024 19:35:19.830022  658927 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11867,"bootTime":1698164253,"procs":491,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:35:19.830122  658927 start.go:138] virtualization: kvm guest
	I1024 19:35:19.833431  658927 out.go:177] * [running-upgrade-597198] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:35:19.836054  658927 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:35:19.836100  658927 notify.go:220] Checking for updates...
	I1024 19:35:19.838493  658927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:35:19.842282  658927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:35:19.844941  658927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:35:19.847033  658927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:35:19.849206  658927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:35:19.851798  658927 config.go:182] Loaded profile config "running-upgrade-597198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1024 19:35:19.851852  658927 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:35:19.855100  658927 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 19:35:19.862633  658927 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:35:19.898777  658927 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:35:19.898888  658927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:35:19.967139  658927 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:72 SystemTime:2023-10-24 19:35:19.956264511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:35:19.967308  658927 docker.go:295] overlay module found
	I1024 19:35:19.969736  658927 out.go:177] * Using the docker driver based on existing profile
	I1024 19:35:19.972187  658927 start.go:298] selected driver: docker
	I1024 19:35:19.972215  658927 start.go:902] validating driver "docker" against &{Name:running-upgrade-597198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-597198 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:35:19.972340  658927 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:35:19.973686  658927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:35:20.047248  658927 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:72 SystemTime:2023-10-24 19:35:20.036545361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:35:20.047575  658927 cni.go:84] Creating CNI manager for ""
	I1024 19:35:20.047591  658927 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1024 19:35:20.047597  658927 start_flags.go:323] config:
	{Name:running-upgrade-597198 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-597198 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:35:20.051543  658927 out.go:177] * Starting control plane node running-upgrade-597198 in cluster running-upgrade-597198
	I1024 19:35:20.053538  658927 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:35:20.055685  658927 out.go:177] * Pulling base image ...
	I1024 19:35:20.057376  658927 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1024 19:35:20.057653  658927 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:35:20.079511  658927 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:35:20.079565  658927 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1024 19:35:20.089814  658927 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1024 19:35:20.090036  658927 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/running-upgrade-597198/config.json ...
	I1024 19:35:20.090340  658927 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:35:20.090428  658927 start.go:365] acquiring machines lock for running-upgrade-597198: {Name:mkb54ccb54bc7a572900a142c50757116fbdeb99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090534  658927 start.go:369] acquired machines lock for "running-upgrade-597198" in 81.375µs
	I1024 19:35:20.090552  658927 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:35:20.090563  658927 fix.go:54] fixHost starting: m01
	I1024 19:35:20.090632  658927 cache.go:107] acquiring lock: {Name:mk23591311b66e09432581f0a19b8da3091dab5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090657  658927 cache.go:107] acquiring lock: {Name:mkba986068afe766ac334fbe160e3814ab7891b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090685  658927 cache.go:107] acquiring lock: {Name:mkd6855bcb3baa0e93743901014f291bdd3cbc43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090721  658927 cache.go:107] acquiring lock: {Name:mkc95ce00f0f8c85afffc05055540eca2dd57a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090739  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1024 19:35:20.090766  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1024 19:35:20.090757  658927 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 99.388µs
	I1024 19:35:20.090762  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1024 19:35:20.090794  658927 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 111.043µs
	I1024 19:35:20.090809  658927 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1024 19:35:20.090775  658927 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 55.216µs
	I1024 19:35:20.090795  658927 cache.go:107] acquiring lock: {Name:mkf4f49ed588f83f1b43655ed43c3b2f05a45b0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090821  658927 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1024 19:35:20.090783  658927 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1024 19:35:20.090654  658927 cache.go:107] acquiring lock: {Name:mkedfe4052e36dadf4a1c1af4d0ee5eff8bce76d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090847  658927 cli_runner.go:164] Run: docker container inspect running-upgrade-597198 --format={{.State.Status}}
	I1024 19:35:20.090834  658927 cache.go:107] acquiring lock: {Name:mkbfc817fafb929dc311e2d385675baff7f2160b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090874  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1024 19:35:20.090862  658927 cache.go:107] acquiring lock: {Name:mk02d5bf68ddbbe6d5175e062ed9e7cc8f94d51b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:20.090883  658927 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 241.433µs
	I1024 19:35:20.090893  658927 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1024 19:35:20.090901  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1024 19:35:20.090908  658927 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 48.097µs
	I1024 19:35:20.090919  658927 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1024 19:35:20.090954  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1024 19:35:20.090734  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:35:20.090972  658927 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 177.992µs
	I1024 19:35:20.090982  658927 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1024 19:35:20.090984  658927 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 363.096µs
	I1024 19:35:20.090995  658927 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:35:20.090848  658927 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1024 19:35:20.091011  658927 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 218.226µs
	I1024 19:35:20.091023  658927 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1024 19:35:20.091032  658927 cache.go:87] Successfully saved all images to host disk.
	I1024 19:35:20.110156  658927 fix.go:102] recreateIfNeeded on running-upgrade-597198: state=Running err=<nil>
	W1024 19:35:20.110187  658927 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:35:20.112971  658927 out.go:177] * Updating the running docker "running-upgrade-597198" container ...
	I1024 19:35:20.115684  658927 machine.go:88] provisioning docker machine ...
	I1024 19:35:20.115753  658927 ubuntu.go:169] provisioning hostname "running-upgrade-597198"
	I1024 19:35:20.115825  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:20.136588  658927 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:20.137117  658927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I1024 19:35:20.137141  658927 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-597198 && echo "running-upgrade-597198" | sudo tee /etc/hostname
	I1024 19:35:20.274584  658927 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-597198
	
	I1024 19:35:20.274679  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:20.297563  658927 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:20.297964  658927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I1024 19:35:20.297985  658927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-597198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-597198/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-597198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:35:20.413827  658927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:35:20.413879  658927 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:35:20.413916  658927 ubuntu.go:177] setting up certificates
	I1024 19:35:20.413940  658927 provision.go:83] configureAuth start
	I1024 19:35:20.414058  658927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-597198
	I1024 19:35:20.435818  658927 provision.go:138] copyHostCerts
	I1024 19:35:20.435897  658927 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:35:20.435927  658927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:35:20.436012  658927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:35:20.436112  658927 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:35:20.436124  658927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:35:20.436151  658927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:35:20.436276  658927 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:35:20.436288  658927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:35:20.436310  658927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:35:20.436366  658927 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-597198 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-597198]
	I1024 19:35:20.584085  658927 provision.go:172] copyRemoteCerts
	I1024 19:35:20.584198  658927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:35:20.584250  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:20.606802  658927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/running-upgrade-597198/id_rsa Username:docker}
	I1024 19:35:20.698426  658927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:35:20.723755  658927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 19:35:20.749308  658927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:35:20.776000  658927 provision.go:86] duration metric: configureAuth took 362.032364ms
	I1024 19:35:20.776101  658927 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:35:20.776342  658927 config.go:182] Loaded profile config "running-upgrade-597198": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1024 19:35:20.776466  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:20.802878  658927 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:20.803347  658927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33392 <nil> <nil>}
	I1024 19:35:20.803373  658927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:35:21.294653  658927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:35:21.294684  658927 machine.go:91] provisioned docker machine in 1.178964045s
	I1024 19:35:21.294700  658927 start.go:300] post-start starting for "running-upgrade-597198" (driver="docker")
	I1024 19:35:21.294717  658927 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:35:21.294799  658927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:35:21.294874  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:21.319575  658927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/running-upgrade-597198/id_rsa Username:docker}
	I1024 19:35:21.414846  658927 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:35:21.420167  658927 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:35:21.420199  658927 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:35:21.420212  658927 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:35:21.420221  658927 info.go:137] Remote host: Ubuntu 19.10
	I1024 19:35:21.420237  658927 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:35:21.420322  658927 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:35:21.420440  658927 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:35:21.420541  658927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:35:21.434741  658927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:35:21.460475  658927 start.go:303] post-start completed in 165.754637ms
	I1024 19:35:21.460561  658927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:35:21.460594  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:21.490495  658927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/running-upgrade-597198/id_rsa Username:docker}
	I1024 19:35:21.581758  658927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:35:21.586723  658927 fix.go:56] fixHost completed within 1.496152544s
	I1024 19:35:21.586750  658927 start.go:83] releasing machines lock for "running-upgrade-597198", held for 1.496202852s
	I1024 19:35:21.586834  658927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-597198
	I1024 19:35:21.608407  658927 ssh_runner.go:195] Run: cat /version.json
	I1024 19:35:21.608461  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:21.608515  658927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:35:21.608595  658927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-597198
	I1024 19:35:21.637622  658927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/running-upgrade-597198/id_rsa Username:docker}
	I1024 19:35:21.642738  658927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33392 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/running-upgrade-597198/id_rsa Username:docker}
	W1024 19:35:21.758918  658927 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 19:35:21.759019  658927 ssh_runner.go:195] Run: systemctl --version
	I1024 19:35:21.764865  658927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:35:21.850796  658927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:35:21.856323  658927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:35:21.878482  658927 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:35:21.878571  658927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:35:21.912486  658927 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:35:21.912526  658927 start.go:472] detecting cgroup driver to use...
	I1024 19:35:21.912567  658927 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:35:21.912631  658927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:35:21.947767  658927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:35:21.962747  658927 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:35:21.962818  658927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:35:21.978342  658927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:35:21.993240  658927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 19:35:22.005782  658927 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 19:35:22.005870  658927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:35:22.112440  658927 docker.go:214] disabling docker service ...
	I1024 19:35:22.112508  658927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:35:22.125802  658927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:35:22.138347  658927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:35:22.222609  658927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:35:22.299937  658927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:35:22.312674  658927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:35:22.329755  658927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:35:22.329881  658927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:35:22.343269  658927 out.go:177] 
	W1024 19:35:22.345171  658927 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 19:35:22.345307  658927 out.go:239] * 
	* 
	W1024 19:35:22.346321  658927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:35:22.347926  658927 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-597198 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
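The exit status 90 traces to the pause_image rewrite captured in the stderr block: the new binary runs sed against the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf, but the kicbase container created by v1.9.0 (Ubuntu 19.10, per the "Remote host" line in the log) predates that drop-in layout, so sed exits 2. A minimal guarded variant of the same update is sketched below; the fallback path /etc/crio/crio.conf is an assumption about where older images keep the main config, and this is an illustration, not minikube's actual remediation:

	# Sketch: fall back to the main crio.conf when the drop-in from the log is absent.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumed location on pre-drop-in images
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"

With the guard in place, the sed no longer fails with "No such file or directory" on images that lack the crio.conf.d directory.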
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-24 19:35:22.369357135 +0000 UTC m=+2085.206901805
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-597198
helpers_test.go:235: (dbg) docker inspect running-upgrade-597198:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022",
	        "Created": "2023-10-24T19:34:15.825725249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 647548,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:34:16.640478688Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022/cbf15ee1f6652867ac2b17083583ef7694f682c2c8a11aafb169cba380abb022-json.log",
	        "Name": "/running-upgrade-597198",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-597198:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5edf9d773488e1bd6129ddb847c0ac9bfd381cddaf1b43503c72a6551cad5de1-init/diff:/var/lib/docker/overlay2/6e2da29a90900f2f51715d467492dfef4c42d5f987b91e8480cd94100ca432ed/diff:/var/lib/docker/overlay2/84b33924dfa495c6ba9fd97043afd5c777d4bde3bcb0da2e5a1df07b9b9d6bbe/diff:/var/lib/docker/overlay2/819ca6473ba6d90a34ba2c66cee8604011688435a6906cb333a2f198cd6043af/diff:/var/lib/docker/overlay2/4c0fadbf16f12e3d46e1d0f47a18f0b6acecc20d1793d39987ab7d5883a65604/diff:/var/lib/docker/overlay2/ad451377a1afde15d058b76f9ffc6acfad0cf637766aa34f9758fd47b919d93d/diff:/var/lib/docker/overlay2/18814b6f6447e5966059508f968fcea09050066160193cf7de49abdc36becf1b/diff:/var/lib/docker/overlay2/80b4679c920605f7a7e6f02dc9d8e15301aa5c2375583e9e609bf41ebca74380/diff:/var/lib/docker/overlay2/bab922ac3f31e70ef2a55580916905b7bdf47e24107e77a985ad824320052517/diff:/var/lib/docker/overlay2/2d85140a170d189e03e795c7c34170a35f0e4390e05834f65dc00e5cb0aba738/diff:/var/lib/docker/overlay2/8ee9db
9ed6175ab25ee5aace86272bc1d3f605404b16041bcae7415cf4aed953/diff:/var/lib/docker/overlay2/8f79d7bd827239cbca891dba613030dece08adc4ee4846397cbb31beb978bf41/diff:/var/lib/docker/overlay2/47b0500feaa12e1373ae892d7d7656b1387f984c851adce12166ebe22e891618/diff:/var/lib/docker/overlay2/db3e067816cf8c7589078ee8cdd63078b9c87ca08aa537c5c99da0143728268a/diff:/var/lib/docker/overlay2/c2f5f8f130504c4591fe816e5bd8b93c2d73d7bfeea08675c23adc3ad8f6b8dd/diff:/var/lib/docker/overlay2/51fd06f517e9ce65b16a1763fc2a420dc9e5d50329dc0dbe9c63ee638da840a9/diff:/var/lib/docker/overlay2/148b45c2b3b3c92266666d918d43c7ec76d6f655e4f5bd193343c8f8c33ea583/diff:/var/lib/docker/overlay2/2c4dfd032bfdf18cce3c6f0b3b39ab77562023aa988a8db27ab51e2982cfdaf3/diff:/var/lib/docker/overlay2/4a428e81a05979bf08ef43c7ea8a27acc18451082e0ba0a67ccf45d3ddfba6ce/diff:/var/lib/docker/overlay2/d15c08093c198b9750ab21095fa18eafaf371d5cf8594ab376efe4adcf40aa2f/diff:/var/lib/docker/overlay2/a50542bfe83b077990b8b75a399708ae6c434d44ea52c32fbb3b664e201ae4f4/diff:/var/lib/d
ocker/overlay2/bedb37d3a87779cc889db1c323527aa3fd333cd6ec6d6960e561342e911cddb5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5edf9d773488e1bd6129ddb847c0ac9bfd381cddaf1b43503c72a6551cad5de1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5edf9d773488e1bd6129ddb847c0ac9bfd381cddaf1b43503c72a6551cad5de1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5edf9d773488e1bd6129ddb847c0ac9bfd381cddaf1b43503c72a6551cad5de1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-597198",
	                "Source": "/var/lib/docker/volumes/running-upgrade-597198/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-597198",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-597198",
	                "name.minikube.sigs.k8s.io": "running-upgrade-597198",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39008d3ca05b0028492f0d598206ae68d480e18f61c10066cf6952b33c555c7f",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39008d3ca05b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "4cde4ecada6b11d3013a2c16c27b53cc777faccd17059477a86e966af633c7d4",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "379071ce83fb53a474e678e42d3457321c68fc23670ba2140cc9e9628ef86432",
	                    "EndpointID": "4cde4ecada6b11d3013a2c16c27b53cc777faccd17059477a86e966af633c7d4",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
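The Ports section of the inspect dump above is what the repeated docker container inspect template in the log reads; for this container the SSH mapping resolves to 127.0.0.1:33392. The same lookup run by hand, with the template and container name taken verbatim from the log (minus minikube's extra quoting):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-597198
	# prints 33392, matching "22/tcp" under NetworkSettings.Ports in the dump above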
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-597198 -n running-upgrade-597198
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-597198 -n running-upgrade-597198: exit status 4 (341.837397ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:35:22.693619  660211 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-597198" does not appear in /home/jenkins/minikube-integration/17485-471553/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-597198" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
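The stale-context state is recoverable with the subcommand the status output itself suggests; a sketch using this run's profile name (the test instead deletes the profile below, so this is illustrative only):

	out/minikube-linux-amd64 update-context -p running-upgrade-597198
	# rewrites the kubeconfig entry so kubectl points at the profile's current endpoint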
helpers_test.go:175: Cleaning up "running-upgrade-597198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-597198
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-597198: (2.057801034s)
--- FAIL: TestRunningBinaryUpgrade (69.73s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (77.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.4067098893.exe start -p stopped-upgrade-878231 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1024 19:33:18.869493  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.4067098893.exe start -p stopped-upgrade-878231 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.554497629s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.4067098893.exe -p stopped-upgrade-878231 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.4067098893.exe -p stopped-upgrade-878231 stop: (1.118908948s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-878231 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-878231 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.377022188s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-878231] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-878231 in cluster stopped-upgrade-878231
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-878231" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:34:05.876738  645550 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:34:05.876960  645550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:34:05.876971  645550 out.go:309] Setting ErrFile to fd 2...
	I1024 19:34:05.876978  645550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:34:05.877244  645550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:34:05.877941  645550 out.go:303] Setting JSON to false
	I1024 19:34:05.879883  645550 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11793,"bootTime":1698164253,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:34:05.879970  645550 start.go:138] virtualization: kvm guest
	I1024 19:34:05.883225  645550 out.go:177] * [stopped-upgrade-878231] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:34:05.885696  645550 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:34:05.885756  645550 notify.go:220] Checking for updates...
	I1024 19:34:05.887956  645550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:34:05.890127  645550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:34:05.892256  645550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:34:05.894428  645550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:34:05.896742  645550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:34:05.899562  645550 config.go:182] Loaded profile config "stopped-upgrade-878231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1024 19:34:05.899615  645550 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:34:05.901900  645550 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 19:34:05.903497  645550 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:34:05.928531  645550 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:34:05.928660  645550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:34:06.018830  645550 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:81 SystemTime:2023-10-24 19:34:06.006102898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:34:06.018979  645550 docker.go:295] overlay module found
	I1024 19:34:06.021323  645550 out.go:177] * Using the docker driver based on existing profile
	I1024 19:34:06.023011  645550 start.go:298] selected driver: docker
	I1024 19:34:06.023048  645550 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-878231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-878231 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:34:06.023190  645550 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:34:06.024132  645550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:34:06.103715  645550 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:81 SystemTime:2023-10-24 19:34:06.093603333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:34:06.104163  645550 cni.go:84] Creating CNI manager for ""
	I1024 19:34:06.104194  645550 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1024 19:34:06.104212  645550 start_flags.go:323] config:
	{Name:stopped-upgrade-878231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-878231 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:34:06.107081  645550 out.go:177] * Starting control plane node stopped-upgrade-878231 in cluster stopped-upgrade-878231
	I1024 19:34:06.109097  645550 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:34:06.111096  645550 out.go:177] * Pulling base image ...
	I1024 19:34:06.112985  645550 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I1024 19:34:06.113031  645550 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:34:06.133295  645550 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:34:06.133338  645550 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	W1024 19:34:06.144528  645550 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1024 19:34:06.144706  645550 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/stopped-upgrade-878231/config.json ...
	I1024 19:34:06.145012  645550 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:34:06.145057  645550 start.go:365] acquiring machines lock for stopped-upgrade-878231: {Name:mkc3b8fb2c1a945ae8a964a5826a6af1ad9653a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.145166  645550 start.go:369] acquired machines lock for "stopped-upgrade-878231" in 74.788µs
	I1024 19:34:06.145183  645550 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:34:06.145191  645550 fix.go:54] fixHost starting: m01
	I1024 19:34:06.145469  645550 cli_runner.go:164] Run: docker container inspect stopped-upgrade-878231 --format={{.State.Status}}
	I1024 19:34:06.145864  645550 cache.go:107] acquiring lock: {Name:mk23591311b66e09432581f0a19b8da3091dab5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.145894  645550 cache.go:107] acquiring lock: {Name:mkedfe4052e36dadf4a1c1af4d0ee5eff8bce76d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.145911  645550 cache.go:107] acquiring lock: {Name:mkf4f49ed588f83f1b43655ed43c3b2f05a45b0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.145949  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:34:06.145959  645550 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.746µs
	I1024 19:34:06.145975  645550 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:34:06.145984  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1024 19:34:06.146015  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I1024 19:34:06.146000  645550 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 90.45µs
	I1024 19:34:06.146031  645550 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1024 19:34:06.146028  645550 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 147.584µs
	I1024 19:34:06.146041  645550 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I1024 19:34:06.146049  645550 cache.go:107] acquiring lock: {Name:mkbfc817fafb929dc311e2d385675baff7f2160b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.146089  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I1024 19:34:06.146097  645550 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 51.137µs
	I1024 19:34:06.146106  645550 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I1024 19:34:06.146092  645550 cache.go:107] acquiring lock: {Name:mkba986068afe766ac334fbe160e3814ab7891b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.146092  645550 cache.go:107] acquiring lock: {Name:mk02d5bf68ddbbe6d5175e062ed9e7cc8f94d51b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.146175  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I1024 19:34:06.146208  645550 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 155.826µs
	I1024 19:34:06.146217  645550 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I1024 19:34:06.146224  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I1024 19:34:06.146233  645550 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 191.822µs
	I1024 19:34:06.146250  645550 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I1024 19:34:06.146279  645550 cache.go:107] acquiring lock: {Name:mkc95ce00f0f8c85afffc05055540eca2dd57a86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.146335  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I1024 19:34:06.146355  645550 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 398.481µs
	I1024 19:34:06.146363  645550 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I1024 19:34:06.146475  645550 cache.go:107] acquiring lock: {Name:mkd6855bcb3baa0e93743901014f291bdd3cbc43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:34:06.146571  645550 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I1024 19:34:06.146585  645550 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 155.742µs
	I1024 19:34:06.146598  645550 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I1024 19:34:06.146605  645550 cache.go:87] Successfully saved all images to host disk.
	I1024 19:34:06.171509  645550 fix.go:102] recreateIfNeeded on stopped-upgrade-878231: state=Stopped err=<nil>
	W1024 19:34:06.171562  645550 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:34:06.176120  645550 out.go:177] * Restarting existing docker container for "stopped-upgrade-878231" ...
	I1024 19:34:06.178581  645550 cli_runner.go:164] Run: docker start stopped-upgrade-878231
	I1024 19:34:06.542574  645550 cli_runner.go:164] Run: docker container inspect stopped-upgrade-878231 --format={{.State.Status}}
	I1024 19:34:06.574368  645550 kic.go:427] container "stopped-upgrade-878231" state is running.
	I1024 19:34:06.605385  645550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-878231
	I1024 19:34:06.636345  645550 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/stopped-upgrade-878231/config.json ...
	I1024 19:34:06.665511  645550 machine.go:88] provisioning docker machine ...
	I1024 19:34:06.665587  645550 ubuntu.go:169] provisioning hostname "stopped-upgrade-878231"
	I1024 19:34:06.665675  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:06.694553  645550 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:06.694917  645550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1024 19:34:06.694935  645550 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-878231 && echo "stopped-upgrade-878231" | sudo tee /etc/hostname
	I1024 19:34:06.695795  645550 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45224->127.0.0.1:33389: read: connection reset by peer
	I1024 19:34:09.823316  645550 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-878231
	
	I1024 19:34:09.823437  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:09.849157  645550 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:09.849564  645550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1024 19:34:09.849586  645550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-878231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-878231/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-878231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:34:09.961133  645550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:34:09.961170  645550 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:34:09.961220  645550 ubuntu.go:177] setting up certificates
	I1024 19:34:09.961234  645550 provision.go:83] configureAuth start
	I1024 19:34:09.961289  645550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-878231
	I1024 19:34:09.981587  645550 provision.go:138] copyHostCerts
	I1024 19:34:09.981667  645550 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:34:09.981692  645550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:34:09.981768  645550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:34:09.981889  645550 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:34:09.981903  645550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:34:09.981944  645550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:34:09.982020  645550 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:34:09.982030  645550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:34:09.982063  645550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:34:09.982127  645550 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-878231 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-878231]
	I1024 19:34:10.237913  645550 provision.go:172] copyRemoteCerts
	I1024 19:34:10.238001  645550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:34:10.238051  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:10.260001  645550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/stopped-upgrade-878231/id_rsa Username:docker}
	I1024 19:34:10.346473  645550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:34:10.369990  645550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 19:34:10.394627  645550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:34:10.418986  645550 provision.go:86] duration metric: configureAuth took 457.735ms
	I1024 19:34:10.419116  645550 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:34:10.419503  645550 config.go:182] Loaded profile config "stopped-upgrade-878231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I1024 19:34:10.419701  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:10.442364  645550 main.go:141] libmachine: Using SSH client type: native
	I1024 19:34:10.442923  645550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33389 <nil> <nil>}
	I1024 19:34:10.442971  645550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:34:11.250675  645550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:34:11.250707  645550 machine.go:91] provisioned docker machine in 4.585156298s
	I1024 19:34:11.250722  645550 start.go:300] post-start starting for "stopped-upgrade-878231" (driver="docker")
	I1024 19:34:11.250736  645550 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:34:11.250804  645550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:34:11.250870  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:11.275387  645550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/stopped-upgrade-878231/id_rsa Username:docker}
	I1024 19:34:11.362346  645550 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:34:11.367546  645550 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:34:11.367584  645550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:34:11.367600  645550 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:34:11.367608  645550 info.go:137] Remote host: Ubuntu 19.10
	I1024 19:34:11.367620  645550 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:34:11.367679  645550 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:34:11.367800  645550 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:34:11.367916  645550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:34:11.377862  645550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:34:11.401388  645550 start.go:303] post-start completed in 150.648389ms
	I1024 19:34:11.401491  645550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:34:11.401546  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:11.422252  645550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/stopped-upgrade-878231/id_rsa Username:docker}
	I1024 19:34:11.501984  645550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:34:11.506755  645550 fix.go:56] fixHost completed within 5.361552606s
	I1024 19:34:11.506786  645550 start.go:83] releasing machines lock for "stopped-upgrade-878231", held for 5.361608496s
	I1024 19:34:11.506889  645550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-878231
	I1024 19:34:11.528875  645550 ssh_runner.go:195] Run: cat /version.json
	I1024 19:34:11.528983  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:11.528991  645550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:34:11.529070  645550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-878231
	I1024 19:34:11.555992  645550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/stopped-upgrade-878231/id_rsa Username:docker}
	I1024 19:34:11.557679  645550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33389 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/stopped-upgrade-878231/id_rsa Username:docker}
	W1024 19:34:11.636612  645550 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 19:34:11.636706  645550 ssh_runner.go:195] Run: systemctl --version
	I1024 19:34:11.666985  645550 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:34:11.728760  645550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:34:11.733795  645550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:34:11.753466  645550 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:34:11.753584  645550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:34:11.781968  645550 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:34:11.781999  645550 start.go:472] detecting cgroup driver to use...
	I1024 19:34:11.782043  645550 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:34:11.782095  645550 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:34:11.810422  645550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:34:11.823292  645550 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:34:11.823387  645550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:34:11.835618  645550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:34:11.848721  645550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 19:34:11.861146  645550 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 19:34:11.861269  645550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:34:11.940217  645550 docker.go:214] disabling docker service ...
	I1024 19:34:11.940310  645550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:34:11.952971  645550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:34:11.965233  645550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:34:12.039841  645550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:34:12.116831  645550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:34:12.128736  645550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:34:12.147326  645550 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:34:12.147429  645550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:34:12.161539  645550 out.go:177] 
	W1024 19:34:12.163995  645550 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 19:34:12.164049  645550 out.go:239] * 
	W1024 19:34:12.165471  645550 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:34:12.169220  645550 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-878231 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (77.06s)
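
The proximate failure above is the pause_image rewrite: it targets /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v1.9.0-era base image does not ship (the log reports the remote host as Ubuntu 19.10). A guarded variant of the same command is sketched below, under the assumption (not verified in this run) that the legacy image keeps its CRI-O configuration in the single file /etc/crio/crio.conf:

	# Sketch only: prefer the drop-in file when present; the fallback
	# path is an assumption about the legacy image, not taken from the log.
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"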

TestPause/serial/SecondStartNoReconfiguration (62.32s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-639553 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-639553 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (55.198988483s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-639553] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-639553 in cluster pause-639553
	* Pulling base image ...
	* Updating the running docker "pause-639553" container ...
	* Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-639553" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1024 19:35:24.868811  661242 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:35:24.869038  661242 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:24.869056  661242 out.go:309] Setting ErrFile to fd 2...
	I1024 19:35:24.869072  661242 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:24.869419  661242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:35:24.870135  661242 out.go:303] Setting JSON to false
	I1024 19:35:24.872176  661242 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11872,"bootTime":1698164253,"procs":555,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:35:24.872337  661242 start.go:138] virtualization: kvm guest
	I1024 19:35:24.875476  661242 out.go:177] * [pause-639553] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:35:24.877333  661242 notify.go:220] Checking for updates...
	I1024 19:35:24.880861  661242 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:35:24.887315  661242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:35:24.889580  661242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:35:24.891465  661242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:35:24.893228  661242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:35:24.895139  661242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:35:24.898464  661242 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:24.899334  661242 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:35:24.939688  661242 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:35:24.939790  661242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:35:25.029521  661242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:35:25.01267319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:35:25.029649  661242 docker.go:295] overlay module found
	I1024 19:35:25.032013  661242 out.go:177] * Using the docker driver based on existing profile
	I1024 19:35:25.033810  661242 start.go:298] selected driver: docker
	I1024 19:35:25.033829  661242 start.go:902] validating driver "docker" against &{Name:pause-639553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-639553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:35:25.033988  661242 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:35:25.034094  661242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:35:25.131761  661242 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:35:25.121723178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:35:25.132733  661242 cni.go:84] Creating CNI manager for ""
	I1024 19:35:25.132764  661242 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:35:25.132818  661242 start_flags.go:323] config:
	{Name:pause-639553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-639553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:35:25.135773  661242 out.go:177] * Starting control plane node pause-639553 in cluster pause-639553
	I1024 19:35:25.137774  661242 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:35:25.139644  661242 out.go:177] * Pulling base image ...
	I1024 19:35:25.141321  661242 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:35:25.141396  661242 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:35:25.141411  661242 cache.go:57] Caching tarball of preloaded images
	I1024 19:35:25.141526  661242 preload.go:174] Found /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1024 19:35:25.141538  661242 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:35:25.141725  661242 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/config.json ...
	I1024 19:35:25.142049  661242 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:35:25.163956  661242 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:35:25.163997  661242 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:35:25.164017  661242 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:35:25.164071  661242 start.go:365] acquiring machines lock for pause-639553: {Name:mkd92adb49a1f5f119f2e9e0d0502956f25378e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:35:25.164138  661242 start.go:369] acquired machines lock for "pause-639553" in 41.017µs
	I1024 19:35:25.164159  661242 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:35:25.164169  661242 fix.go:54] fixHost starting: 
	I1024 19:35:25.164465  661242 cli_runner.go:164] Run: docker container inspect pause-639553 --format={{.State.Status}}
	I1024 19:35:25.187627  661242 fix.go:102] recreateIfNeeded on pause-639553: state=Running err=<nil>
	W1024 19:35:25.187672  661242 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:35:25.190534  661242 out.go:177] * Updating the running docker "pause-639553" container ...
	I1024 19:35:25.192316  661242 machine.go:88] provisioning docker machine ...
	I1024 19:35:25.192376  661242 ubuntu.go:169] provisioning hostname "pause-639553"
	I1024 19:35:25.192462  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:25.219612  661242 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:25.220199  661242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1024 19:35:25.220227  661242 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-639553 && echo "pause-639553" | sudo tee /etc/hostname
	I1024 19:35:25.381371  661242 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-639553
	
	I1024 19:35:25.381478  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:25.405828  661242 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:25.406347  661242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1024 19:35:25.406380  661242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-639553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-639553/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-639553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:35:25.535173  661242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:35:25.535209  661242 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-471553/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-471553/.minikube}
	I1024 19:35:25.535266  661242 ubuntu.go:177] setting up certificates
	I1024 19:35:25.535280  661242 provision.go:83] configureAuth start
	I1024 19:35:25.535344  661242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-639553
	I1024 19:35:25.559525  661242 provision.go:138] copyHostCerts
	I1024 19:35:25.559618  661242 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem, removing ...
	I1024 19:35:25.559631  661242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem
	I1024 19:35:25.559739  661242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/ca.pem (1082 bytes)
	I1024 19:35:25.559879  661242 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem, removing ...
	I1024 19:35:25.559887  661242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem
	I1024 19:35:25.559923  661242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/cert.pem (1123 bytes)
	I1024 19:35:25.560021  661242 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem, removing ...
	I1024 19:35:25.560027  661242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem
	I1024 19:35:25.560055  661242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-471553/.minikube/key.pem (1675 bytes)
	I1024 19:35:25.560130  661242 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem org=jenkins.pause-639553 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-639553]
	I1024 19:35:25.662606  661242 provision.go:172] copyRemoteCerts
	I1024 19:35:25.662706  661242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:35:25.662791  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:25.683924  661242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/pause-639553/id_rsa Username:docker}
	I1024 19:35:25.778710  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:35:25.807332  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1024 19:35:25.839253  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:35:25.872900  661242 provision.go:86] duration metric: configureAuth took 337.575ms
	I1024 19:35:25.872942  661242 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:35:25.873303  661242 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:25.873444  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:25.904684  661242 main.go:141] libmachine: Using SSH client type: native
	I1024 19:35:25.905402  661242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1024 19:35:25.905435  661242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:35:31.376577  661242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:35:31.376617  661242 machine.go:91] provisioned docker machine in 6.184271924s
	I1024 19:35:31.376634  661242 start.go:300] post-start starting for "pause-639553" (driver="docker")
	I1024 19:35:31.376647  661242 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:35:31.376756  661242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:35:31.376869  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:31.399651  661242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/pause-639553/id_rsa Username:docker}
	I1024 19:35:31.501795  661242 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:35:31.507189  661242 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:35:31.507236  661242 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:35:31.507254  661242 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:35:31.507265  661242 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:35:31.507279  661242 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/addons for local assets ...
	I1024 19:35:31.507338  661242 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-471553/.minikube/files for local assets ...
	I1024 19:35:31.507467  661242 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem -> 4783232.pem in /etc/ssl/certs
	I1024 19:35:31.507586  661242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:35:31.518739  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:35:31.548689  661242 start.go:303] post-start completed in 172.035877ms
	I1024 19:35:31.548785  661242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:35:31.548846  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:31.569058  661242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/pause-639553/id_rsa Username:docker}
	I1024 19:35:31.660256  661242 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:35:31.667225  661242 fix.go:56] fixHost completed within 6.503042466s
	I1024 19:35:31.667273  661242 start.go:83] releasing machines lock for "pause-639553", held for 6.503120048s
	I1024 19:35:31.667395  661242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-639553
	I1024 19:35:31.694157  661242 ssh_runner.go:195] Run: cat /version.json
	I1024 19:35:31.694217  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:31.694273  661242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:35:31.694331  661242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-639553
	I1024 19:35:31.717766  661242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/pause-639553/id_rsa Username:docker}
	I1024 19:35:31.718603  661242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/pause-639553/id_rsa Username:docker}
	I1024 19:35:31.813740  661242 ssh_runner.go:195] Run: systemctl --version
	I1024 19:35:31.939025  661242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:35:32.159224  661242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:35:32.165987  661242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:35:32.181414  661242 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:35:32.181518  661242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:35:32.250458  661242 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
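Both find invocations above disable CNI configs the same way: matching files in /etc/cni/net.d are renamed to <name>.mk_disabled so the runtime ignores them. An equivalent sketch using filepath.Glob in place of find:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Same patterns the two find commands used above.
	for _, pat := range []string{"/etc/cni/net.d/*loopback.conf*", "/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled, mirrors -not -name *.mk_disabled
			}
			fmt.Println("disabling", m)
			os.Rename(m, m+".mk_disabled")
		}
	}
}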
	I1024 19:35:32.250493  661242 start.go:472] detecting cgroup driver to use...
	I1024 19:35:32.250543  661242 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:35:32.250591  661242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:35:32.341968  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:35:32.368755  661242 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:35:32.368840  661242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:35:32.548408  661242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:35:32.570680  661242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:35:33.094784  661242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:35:33.474128  661242 docker.go:214] disabling docker service ...
	I1024 19:35:33.474215  661242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:35:33.554331  661242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:35:33.573155  661242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:35:33.956001  661242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:35:34.270598  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:35:34.341580  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:35:34.370593  661242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:35:34.370667  661242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:35:34.384881  661242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:35:34.384995  661242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:35:34.450751  661242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:35:34.463856  661242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
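The three sed calls rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image, set cgroup_manager to cgroupfs, then drop and re-add conmon_cgroup = "pod" directly after it. The same edit as a Go sketch (assumes the file exists and is writable):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Mirrors the three sed substitutions from the log.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "") // delete the old value
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(s, "${1}\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}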
	I1024 19:35:34.485084  661242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:35:34.550235  661242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:35:34.563906  661242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:35:34.574417  661242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:35:34.963566  661242 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:35:42.841015  661242 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.877404513s)
	I1024 19:35:42.841051  661242 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:35:42.841122  661242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
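"Will wait 60s for socket path" is a plain stat poll against /var/run/crio/crio.sock. A sketch of such a loop, assuming a 500ms interval (the real polling interval is not shown in the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}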
	I1024 19:35:42.845754  661242 start.go:540] Will wait 60s for crictl version
	I1024 19:35:42.845827  661242 ssh_runner.go:195] Run: which crictl
	I1024 19:35:42.849580  661242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:35:42.887778  661242 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:35:42.887876  661242 ssh_runner.go:195] Run: crio --version
	I1024 19:35:42.930600  661242 ssh_runner.go:195] Run: crio --version
	I1024 19:35:42.973339  661242 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:35:42.974834  661242 cli_runner.go:164] Run: docker network inspect pause-639553 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:35:42.997270  661242 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1024 19:35:43.002274  661242 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:35:43.002347  661242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:35:43.046070  661242 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:35:43.046121  661242 crio.go:415] Images already preloaded, skipping extraction
	I1024 19:35:43.046183  661242 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:35:43.092519  661242 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:35:43.092546  661242 cache_images.go:84] Images are preloaded, skipping loading
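The preload check parses `sudo crictl images --output json` and confirms the expected image set is already present, which is why extraction and image loading are both skipped above. A hedged sketch of that comparison; the JSON field names follow crictl's documented output, and the expected list here is an illustrative subset:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			have[t] = true
		}
	}
	// Illustrative subset of the v1.28.3 image set.
	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.28.3", "registry.k8s.io/etcd:3.5.9-0"} {
		fmt.Println(want, "preloaded:", have[want])
	}
}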
	I1024 19:35:43.092609  661242 ssh_runner.go:195] Run: crio config
	I1024 19:35:43.161729  661242 cni.go:84] Creating CNI manager for ""
	I1024 19:35:43.161756  661242 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:35:43.161789  661242 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:35:43.161827  661242 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-639553 NodeName:pause-639553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:35:43.161987  661242 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-639553"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
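The config above is rendered from the kubeadm options struct logged at kubeadm.go:176. One plausible way to produce such YAML is Go's text/template; the template below is a hypothetical fragment for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the rendered config above.
	t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.67.2",
		APIServerPort:     8443,
		NodeName:          "pause-639553",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.28.3",
	})
}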
	
	I1024 19:35:43.162061  661242 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-639553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:pause-639553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:35:43.162124  661242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:35:43.173428  661242 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:35:43.173514  661242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:35:43.184522  661242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1024 19:35:43.207704  661242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:35:43.234225  661242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1024 19:35:43.257305  661242 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:35:43.261779  661242 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553 for IP: 192.168.67.2
	I1024 19:35:43.261847  661242 certs.go:190] acquiring lock for shared ca certs: {Name:mkd071e4924662af2a94ad3f2018330ff8506826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:35:43.262097  661242 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key
	I1024 19:35:43.262137  661242 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key
	I1024 19:35:43.262213  661242 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key
	I1024 19:35:43.262284  661242 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/apiserver.key.c7fa3a9e
	I1024 19:35:43.262318  661242 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/proxy-client.key
	I1024 19:35:43.262438  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem (1338 bytes)
	W1024 19:35:43.262497  661242 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323_empty.pem, impossibly tiny 0 bytes
	I1024 19:35:43.262510  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:35:43.262564  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:35:43.262600  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:35:43.262641  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/certs/home/jenkins/minikube-integration/17485-471553/.minikube/certs/key.pem (1675 bytes)
	I1024 19:35:43.262691  661242 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem (1708 bytes)
	I1024 19:35:43.263924  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:35:43.293842  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:35:43.321604  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:35:43.350003  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:35:43.385815  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:35:43.428859  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:35:43.457535  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:35:43.484702  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:35:43.511142  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:35:43.541996  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/certs/478323.pem --> /usr/share/ca-certificates/478323.pem (1338 bytes)
	I1024 19:35:43.568233  661242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/ssl/certs/4783232.pem --> /usr/share/ca-certificates/4783232.pem (1708 bytes)
	I1024 19:35:43.593409  661242 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:35:43.614925  661242 ssh_runner.go:195] Run: openssl version
	I1024 19:35:43.620694  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:35:43.631166  661242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:43.635191  661242 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:43.635244  661242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:35:43.642841  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:35:43.655156  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/478323.pem && ln -fs /usr/share/ca-certificates/478323.pem /etc/ssl/certs/478323.pem"
	I1024 19:35:43.667592  661242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/478323.pem
	I1024 19:35:43.672269  661242 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:07 /usr/share/ca-certificates/478323.pem
	I1024 19:35:43.672355  661242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/478323.pem
	I1024 19:35:43.682335  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/478323.pem /etc/ssl/certs/51391683.0"
	I1024 19:35:43.696630  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4783232.pem && ln -fs /usr/share/ca-certificates/4783232.pem /etc/ssl/certs/4783232.pem"
	I1024 19:35:43.710573  661242 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4783232.pem
	I1024 19:35:43.716914  661242 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:07 /usr/share/ca-certificates/4783232.pem
	I1024 19:35:43.716996  661242 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4783232.pem
	I1024 19:35:43.727763  661242 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4783232.pem /etc/ssl/certs/3ec20f2e.0"
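Each certificate is installed the way OpenSSL's hashed cert directory expects: copy it into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 to it (b5213941.0 for minikubeCA.pem above). A sketch that shells out to the same commands:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pem string) error {
	// Same invocation as in the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // replace an existing link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	for _, p := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/478323.pem",
		"/usr/share/ca-certificates/4783232.pem",
	} {
		if err := linkCert(p); err != nil {
			fmt.Println(p, err)
		}
	}
}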
	I1024 19:35:43.741287  661242 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:35:43.745804  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 19:35:43.754814  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 19:35:43.763735  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 19:35:43.773329  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 19:35:43.781329  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 19:35:43.789574  661242 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
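`openssl x509 -checkend 86400` asks whether a certificate is still valid 24 hours from now; a non-zero exit would force regeneration. The equivalent check with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the condition under which `openssl x509 -checkend <seconds>` exits non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}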
	I1024 19:35:43.798921  661242 kubeadm.go:404] StartCluster: {Name:pause-639553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:pause-639553 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:35:43.799180  661242 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:35:43.799249  661242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:35:43.859001  661242 cri.go:89] found id: "0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e"
	I1024 19:35:43.859030  661242 cri.go:89] found id: "3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1"
	I1024 19:35:43.859037  661242 cri.go:89] found id: "72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917"
	I1024 19:35:43.859043  661242 cri.go:89] found id: "d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311"
	I1024 19:35:43.859049  661242 cri.go:89] found id: "befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76"
	I1024 19:35:43.859053  661242 cri.go:89] found id: "9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b"
	I1024 19:35:43.859057  661242 cri.go:89] found id: "2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda"
	I1024 19:35:43.859063  661242 cri.go:89] found id: ""
	I1024 19:35:43.859125  661242 ssh_runner.go:195] Run: sudo runc list -f json
	I1024 19:35:43.880745  661242 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e/userdata","rootfs":"/var/lib/containers/storage/overlay/df944194c4eabb41be2ac68fda511e1dcfba846307b1fe5449d82a13547820c9/merged","created":"2023-10-24T19:35:32.456816071Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"83906433","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"83906433\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.t
erminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:32.096331981Z","io.kubernetes.cri-o.Image":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri-o.ImageRef":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-639553\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e7f8311e8be10bf9f993f5c0b107b6b3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-639553_e7f8311e8be10bf9f993f5c0b107b6b3/kube-controller-manager/1.log","io.kubernetes.cr
i-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/df944194c4eabb41be2ac68fda511e1dcfba846307b1fe5449d82a13547820c9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-639553_kube-system_e7f8311e8be10bf9f993f5c0b107b6b3_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/820612a14de0640d1440671847d76cf5404163f833769bd648a74a84e25abe5b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"820612a14de0640d1440671847d76cf5404163f833769bd648a74a84e25abe5b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-639553_kube-system_e7f8311e8be10bf9f993f5c0b107b6b3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":t
rue,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e7f8311e8be10bf9f993f5c0b107b6b3/containers/kube-controller-manager/1949ef25\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e7f8311e8be10bf9f993f5c0b107b6b3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\
",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-639553","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e7f8311e8be10bf9f993f5c0b107b6b3","kubernetes.io/config.hash":"e7f8311e8be10bf9f993f5c0b107b6b3","kubernetes.io/config.seen":"2023-10-24T19:35:00.358109955Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2a119
c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda/userdata","rootfs":"/var/lib/containers/storage/overlay/4c34a6f45635f6e2a9e6702cfa5ecad6f30f71a58ba172d405762e7b7ed278e2/merged","created":"2023-10-24T19:35:32.144830763Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1ad01b6e","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1ad01b6e\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cr
i-o.Created":"2023-10-24T19:35:31.937116612Z","io.kubernetes.cri-o.Image":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri-o.ImageRef":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-6r7cb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f30348b5-115d-4161-a406-07b8e208de06\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-6r7cb_f30348b5-115d-4161-a406-07b8e208de06/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c34a6f45635f6e2a9e6702cfa5ecad6f30f71a58ba172d405762e7b7ed278e2/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-6r7cb_kube-system_f30348b5-115d-4161-a406-07b8e208de06_1","io.kubernetes.
cri-o.ResolvPath":"/run/containers/storage/overlay-containers/61ed8c57dfc2c7a865764231e323b7f3f9202e7f93ee33e69263f7088faae46d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"61ed8c57dfc2c7a865764231e323b7f3f9202e7f93ee33e69263f7088faae46d","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-6r7cb_kube-system_f30348b5-115d-4161-a406-07b8e208de06_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f30348b5-115d-4161-a406-07b8e208de06/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/
dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f30348b5-115d-4161-a406-07b8e208de06/containers/kube-proxy/67091a71\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/f30348b5-115d-4161-a406-07b8e208de06/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f30348b5-115d-4161-a406-07b8e208de06/volumes/kubernetes.io~projected/kube-api-access-blgv2\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-6r7cb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f30348b5-115d-4161-a406-07b8e208de06","kubernetes.io/config.seen":"2023-10-24T19:35:20.241998556Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1/userdata","rootfs":"/var/lib/containers/storage/overlay/4df040690424f12282555e6a4a13f6c75799ead4f17f9aee5f69a35f028f8542/merged","created":"2023-10-24T19:35:32.363419416Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eae50b7b","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eae50b7b\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io
.kubernetes.cri-o.ContainerID":"3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:32.069662836Z","io.kubernetes.cri-o.Image":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri-o.ImageRef":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-639553\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a9568a5c0c150dec1c51aa92981aefde\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-639553_a9568a5c0c150dec1c51aa92981aefde/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4df0406
90424f12282555e6a4a13f6c75799ead4f17f9aee5f69a35f028f8542/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-639553_kube-system_a9568a5c0c150dec1c51aa92981aefde_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5a2daf0b2d6177aae8961b86c5bb3d2995f9001efa537e00b9dc9078fd541ca0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5a2daf0b2d6177aae8961b86c5bb3d2995f9001efa537e00b9dc9078fd541ca0","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-639553_kube-system_a9568a5c0c150dec1c51aa92981aefde_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a9568a5c0c150dec1c51aa92981aefde/containers/kube-apiserver/1ff45d39\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_pa
th\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a9568a5c0c150dec1c51aa92981aefde/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-639553","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.termination
GracePeriod":"30","io.kubernetes.pod.uid":"a9568a5c0c150dec1c51aa92981aefde","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"a9568a5c0c150dec1c51aa92981aefde","kubernetes.io/config.seen":"2023-10-24T19:35:00.358108135Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917/userdata","rootfs":"/var/lib/containers/storage/overlay/0efb734602e9a76fdd84ca5937811fe74cbdbe21998347bb15c3f69ffa4255dc/merged","created":"2023-10-24T19:35:32.265649987Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"51a89f40","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.conta
iner.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"51a89f40\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:32.053019959Z","io.kubernetes.cri-o.Image":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-j6kq7\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.ku
bernetes.pod.uid\":\"efda4578-700d-40de-a3f9-060bebdfddc6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-j6kq7_efda4578-700d-40de-a3f9-060bebdfddc6/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0efb734602e9a76fdd84ca5937811fe74cbdbe21998347bb15c3f69ffa4255dc/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-j6kq7_kube-system_efda4578-700d-40de-a3f9-060bebdfddc6_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/25d0797df60c88cf4246a02717c363a0cc375e9632f013bf3cb154625ffc7779/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"25d0797df60c88cf4246a02717c363a0cc375e9632f013bf3cb154625ffc7779","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-j6kq7_kube-system_efda4578-700d-40de-a3f9-060bebdfddc6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o
.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/efda4578-700d-40de-a3f9-060bebdfddc6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/efda4578-700d-40de-a3f9-060bebdfddc6/containers/kindnet-cni/d790b1d1\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/efda4578-700d-40de-a3f9-060bebdfddc6/volumes/kubernetes.i
o~projected/kube-api-access-2q8t4\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-j6kq7","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"efda4578-700d-40de-a3f9-060bebdfddc6","kubernetes.io/config.seen":"2023-10-24T19:35:20.175603961Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b/userdata","rootfs":"/var/lib/containers/storage/overlay/9bf5d4e7150e3461928b6fb5ab503ac265d8974b832d8bd9ad4dbfe2a72b6c57/merged","created":"2023-10-24T19:35:32.175343712Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"98ed06a1","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.term
inationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"98ed06a1\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:31.958396701Z","io.kubernetes.cri-o.Image":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-639553\",\"io.kuberne
tes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"de7c65e86bbd215ff3bee3f8344c132e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-639553_de7c65e86bbd215ff3bee3f8344c132e/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9bf5d4e7150e3461928b6fb5ab503ac265d8974b832d8bd9ad4dbfe2a72b6c57/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-639553_kube-system_de7c65e86bbd215ff3bee3f8344c132e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/648d9baaf7da61bc1563350a22baf9f1595213e6d7652be47d64edeb53d8a389/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"648d9baaf7da61bc1563350a22baf9f1595213e6d7652be47d64edeb53d8a389","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-639553_kube-system_de7c65e86bbd215ff3bee3f8344c132e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","i
o.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/de7c65e86bbd215ff3bee3f8344c132e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/de7c65e86bbd215ff3bee3f8344c132e/containers/etcd/c4e79dc3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-639553","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"de7c65e86bbd215ff3bee3f8344c132e","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https
://192.168.67.2:2379","kubernetes.io/config.hash":"de7c65e86bbd215ff3bee3f8344c132e","kubernetes.io/config.seen":"2023-10-24T19:35:00.358099789Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76/userdata","rootfs":"/var/lib/containers/storage/overlay/ebc73e6f869e94fb6268414e86835b3cc37cebc0b7f694c6b3491a0c809cb72e/merged","created":"2023-10-24T19:35:32.17192287Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1a68c1c3","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1a68c1c3\",\"io.kubernetes.con
tainer.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:31.976391245Z","io.kubernetes.cri-o.Image":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri-o.ImageRef":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-639553\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7b76bcf0be9283790adf204c398a6bf4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler
-pause-639553_7b76bcf0be9283790adf204c398a6bf4/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ebc73e6f869e94fb6268414e86835b3cc37cebc0b7f694c6b3491a0c809cb72e/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-639553_kube-system_7b76bcf0be9283790adf204c398a6bf4_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/51f2c7b0b16b56ba7f40fc2435df2e8e4be8d701ef2427d19e593282999b06ba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"51f2c7b0b16b56ba7f40fc2435df2e8e4be8d701ef2427d19e593282999b06ba","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-639553_kube-system_7b76bcf0be9283790adf204c398a6bf4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/7b76bcf0be9283790adf204c398a6bf4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7b76bcf0be9283790adf204c398a6bf4/containers/kube-scheduler/d70fc326\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-639553","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7b76bcf0be9283790adf204c398a6bf4","kubernetes.io/config.hash":"7b76bcf0be9283790adf204c398a6bf4","kubernetes.io/config.seen":"2023-10-24T19:35:00.358111102Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311","pid":0,"status":"stop
ped","bundle":"/run/containers/storage/overlay-containers/d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311/userdata","rootfs":"/var/lib/containers/storage/overlay/13f89adc5934b25003d9ad2aed401a56d639d471cc06020f839f4a4ba0d793a5/merged","created":"2023-10-24T19:35:32.255113992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"30b4ddad","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"30b4ddad\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"
},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:35:31.996309713Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernet
es.pod.name\":\"coredns-5dd5756b68-9m8kb\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-9m8kb_1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/13f89adc5934b25003d9ad2aed401a56d639d471cc06020f839f4a4ba0d793a5/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-9m8kb_kube-system_1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f36905aacb6af86e58836da5e018f03880704c74b13509824f71198785e645ff/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f36905aacb6af86e58836da5e018f03880704c74b13509824f71198785e645ff","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-9m8kb_kube-system_1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3_0","io.kubernetes.cri-o.
SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3/containers/coredns/c5c076d8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3/volumes/kubernetes.io~projected/kube-api-access-ntfkk\",\"readonly\":true,\"propagation\":0
,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-9m8kb","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3","kubernetes.io/config.seen":"2023-10-24T19:35:21.423966241Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1024 19:35:43.881218  661242 cri.go:126] list returned 7 containers
	I1024 19:35:43.881237  661242 cri.go:129] container: {ID:0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e Status:stopped}
	I1024 19:35:43.881279  661242 cri.go:135] skipping {0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881300  661242 cri.go:129] container: {ID:2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda Status:stopped}
	I1024 19:35:43.881314  661242 cri.go:135] skipping {2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881326  661242 cri.go:129] container: {ID:3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1 Status:stopped}
	I1024 19:35:43.881337  661242 cri.go:135] skipping {3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1 stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881349  661242 cri.go:129] container: {ID:72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917 Status:stopped}
	I1024 19:35:43.881359  661242 cri.go:135] skipping {72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917 stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881370  661242 cri.go:129] container: {ID:9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b Status:stopped}
	I1024 19:35:43.881380  661242 cri.go:135] skipping {9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881392  661242 cri.go:129] container: {ID:befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76 Status:stopped}
	I1024 19:35:43.881402  661242 cri.go:135] skipping {befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76 stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881413  661242 cri.go:129] container: {ID:d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311 Status:stopped}
	I1024 19:35:43.881423  661242 cri.go:135] skipping {d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311 stopped}: state = "stopped", want "paused"
	I1024 19:35:43.881479  661242 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:35:43.892902  661242 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 19:35:43.892943  661242 kubeadm.go:636] restartCluster start
	I1024 19:35:43.892992  661242 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 19:35:43.903787  661242 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:43.904456  661242 kubeconfig.go:92] found "pause-639553" server: "https://192.168.67.2:8443"
	I1024 19:35:43.905395  661242 kapi.go:59] client config for pause-639553: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:35:43.906190  661242 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 19:35:43.916052  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:43.916132  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:43.929110  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:43.929139  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:43.929178  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:43.940393  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:44.441177  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:44.441279  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:44.455804  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:44.940824  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:44.940908  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:44.953862  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:45.441536  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:45.441638  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:45.455692  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:45.941230  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:45.941337  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:45.954129  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:46.440632  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:46.440765  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:46.452484  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:46.940988  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:46.941129  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:46.955379  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:47.440936  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:47.441052  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:47.451947  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:47.941549  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:47.941827  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:47.956753  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:48.441371  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:48.441479  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:48.455895  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:48.941537  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:48.941633  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:48.952463  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:49.440888  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:49.441004  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:49.453076  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:49.941164  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:49.941319  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:49.953391  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:50.440898  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:50.440997  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:50.454552  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:50.941129  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:50.941223  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:50.959233  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:51.440757  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:51.440889  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:51.453582  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:51.941302  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:51.941402  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:51.959386  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:52.441054  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:52.441138  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:52.455898  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:52.941407  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:52.941495  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:52.952969  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:53.440576  661242 api_server.go:166] Checking apiserver status ...
	I1024 19:35:53.440668  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:35:53.454729  661242 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:53.916417  661242 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 19:35:53.916459  661242 kubeadm.go:1128] stopping kube-system containers ...
	I1024 19:35:53.916475  661242 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 19:35:53.916629  661242 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:35:53.964023  661242 cri.go:89] found id: "7556dd7e77654b82a85d9084c0ecdd4d2247163f51098b477845e37c6b4832b7"
	I1024 19:35:53.964048  661242 cri.go:89] found id: "f02c9006c5461fdb26a7158b616cd24749daedea6b0c4d0066c0016c947d9fe6"
	I1024 19:35:53.964054  661242 cri.go:89] found id: "e616aa8f6da1b319d518f5a6de368ac08f1e1a4e9122121d273a6594f58b381a"
	I1024 19:35:53.964060  661242 cri.go:89] found id: "0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e"
	I1024 19:35:53.964065  661242 cri.go:89] found id: "3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1"
	I1024 19:35:53.964071  661242 cri.go:89] found id: "72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917"
	I1024 19:35:53.964075  661242 cri.go:89] found id: "d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311"
	I1024 19:35:53.964080  661242 cri.go:89] found id: "befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76"
	I1024 19:35:53.964085  661242 cri.go:89] found id: "9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b"
	I1024 19:35:53.964094  661242 cri.go:89] found id: "2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda"
	I1024 19:35:53.964100  661242 cri.go:89] found id: ""
	I1024 19:35:53.964108  661242 cri.go:234] Stopping containers: [7556dd7e77654b82a85d9084c0ecdd4d2247163f51098b477845e37c6b4832b7 f02c9006c5461fdb26a7158b616cd24749daedea6b0c4d0066c0016c947d9fe6 e616aa8f6da1b319d518f5a6de368ac08f1e1a4e9122121d273a6594f58b381a 0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e 3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1 72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917 d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311 befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76 9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b 2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda]
	I1024 19:35:53.964179  661242 ssh_runner.go:195] Run: which crictl
	I1024 19:35:53.969526  661242 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 7556dd7e77654b82a85d9084c0ecdd4d2247163f51098b477845e37c6b4832b7 f02c9006c5461fdb26a7158b616cd24749daedea6b0c4d0066c0016c947d9fe6 e616aa8f6da1b319d518f5a6de368ac08f1e1a4e9122121d273a6594f58b381a 0db7e3bb8e63c96adf965c651b784dda9eea4f344569aeb5e0254ba038feb46e 3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1 72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917 d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311 befe3a8c1e49dbb7da45eaa18430c42232946aafbb413f2f902984155ef7cc76 9573eab2b7bdfe2764e7225e49b7208435d9d22163211292cde3c6e2343ec60b 2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda
	I1024 19:35:54.500129  661242 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 19:35:54.627731  661242 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:35:54.636529  661242 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 24 19:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 24 19:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct 24 19:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 24 19:34 /etc/kubernetes/scheduler.conf
	
	I1024 19:35:54.636606  661242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1024 19:35:54.645288  661242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1024 19:35:54.653609  661242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1024 19:35:54.662475  661242 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:54.662550  661242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1024 19:35:54.672352  661242 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1024 19:35:54.682121  661242 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:35:54.682177  661242 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1024 19:35:54.692940  661242 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:35:54.704465  661242 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 19:35:54.704503  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:54.781364  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:55.992955  661242 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.21154064s)
	I1024 19:35:55.992996  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:56.184346  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:56.260565  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:35:56.445728  661242 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:35:56.445809  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:56.464971  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:56.982036  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:57.481969  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:35:57.541382  661242 api_server.go:72] duration metric: took 1.095651639s to wait for apiserver process to appear ...
	I1024 19:35:57.541412  661242 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:35:57.541434  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:35:57.541802  661242 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1024 19:35:57.541857  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:35:57.542162  661242 api_server.go:269] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I1024 19:35:58.042879  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:00.219338  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:36:00.219373  661242 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:36:00.219404  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:00.364824  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:36:00.364863  661242 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:36:00.543331  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:00.549596  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:36:00.549620  661242 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:36:01.042304  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:01.053490  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:36:01.053562  661242 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:36:01.543233  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:01.550491  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1024 19:36:01.561233  661242 api_server.go:141] control plane version: v1.28.3
	I1024 19:36:01.561272  661242 api_server.go:131] duration metric: took 4.019851714s to wait for apiserver health ...
	I1024 19:36:01.561287  661242 cni.go:84] Creating CNI manager for ""
	I1024 19:36:01.561294  661242 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:36:01.563825  661242 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:36:01.566249  661242 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:36:01.571267  661242 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:36:01.571299  661242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:36:01.594266  661242 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:36:02.406757  661242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:36:02.417963  661242 system_pods.go:59] 7 kube-system pods found
	I1024 19:36:02.418081  661242 system_pods.go:61] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:36:02.418122  661242 system_pods.go:61] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:36:02.418148  661242 system_pods.go:61] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:02.418169  661242 system_pods.go:61] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:36:02.418191  661242 system_pods.go:61] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 19:36:02.418206  661242 system_pods.go:61] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:02.418240  661242 system_pods.go:61] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 19:36:02.418262  661242 system_pods.go:74] duration metric: took 11.476916ms to wait for pod list to return data ...
	I1024 19:36:02.418280  661242 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:36:02.421946  661242 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:36:02.421976  661242 node_conditions.go:123] node cpu capacity is 8
	I1024 19:36:02.421989  661242 node_conditions.go:105] duration metric: took 3.695733ms to run NodePressure ...
	I1024 19:36:02.422012  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:36:02.688891  661242 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 19:36:02.694852  661242 kubeadm.go:787] kubelet initialised
	I1024 19:36:02.694884  661242 kubeadm.go:788] duration metric: took 5.963888ms waiting for restarted kubelet to initialise ...
	I1024 19:36:02.694899  661242 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:02.701629  661242 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:04.723380  661242 pod_ready.go:102] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:36:06.724663  661242 pod_ready.go:102] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:36:08.223882  661242 pod_ready.go:92] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:08.223910  661242 pod_ready.go:81] duration metric: took 5.522253631s waiting for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:08.223920  661242 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:09.745821  661242 pod_ready.go:92] pod "etcd-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:09.745852  661242 pod_ready.go:81] duration metric: took 1.521926143s waiting for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:09.745881  661242 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:09.752908  661242 pod_ready.go:92] pod "kube-apiserver-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:09.752940  661242 pod_ready.go:81] duration metric: took 7.051084ms waiting for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:09.752954  661242 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:11.828242  661242 pod_ready.go:102] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"False"
	I1024 19:36:14.327532  661242 pod_ready.go:102] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"False"
	I1024 19:36:16.328164  661242 pod_ready.go:92] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.328187  661242 pod_ready.go:81] duration metric: took 6.5752253s waiting for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.328197  661242 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.333363  661242 pod_ready.go:92] pod "kube-proxy-6r7cb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.333387  661242 pod_ready.go:81] duration metric: took 5.183562ms waiting for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.333401  661242 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.338732  661242 pod_ready.go:92] pod "kube-scheduler-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.338763  661242 pod_ready.go:81] duration metric: took 5.352325ms waiting for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.338774  661242 pod_ready.go:38] duration metric: took 13.643861491s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:16.338799  661242 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:36:16.348212  661242 ops.go:34] apiserver oom_adj: -16
	I1024 19:36:16.348234  661242 kubeadm.go:640] restartCluster took 32.455283089s
	I1024 19:36:16.348244  661242 kubeadm.go:406] StartCluster complete in 32.54934125s
	I1024 19:36:16.348265  661242 settings.go:142] acquiring lock: {Name:mk9f191a52d3ce53608a65d0f0798312edc39465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:36:16.348351  661242 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:36:16.349502  661242 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/kubeconfig: {Name:mkcf54ea0dedcb61df1368dce9070a6aebbbad94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:36:16.349739  661242 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:36:16.349893  661242 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:36:16.352500  661242 out.go:177] * Enabled addons: 
	I1024 19:36:16.350034  661242 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:36:16.350770  661242 kapi.go:59] client config for pause-639553: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key", CAFile:"/home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:36:16.354496  661242 addons.go:502] enable addons completed in 4.582534ms: enabled=[]
	I1024 19:36:16.358255  661242 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-639553" context rescaled to 1 replicas
	I1024 19:36:16.358305  661242 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:36:16.360482  661242 out.go:177] * Verifying Kubernetes components...
	I1024 19:36:16.362266  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:16.440080  661242 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:36:16.440164  661242 node_ready.go:35] waiting up to 6m0s for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443917  661242 node_ready.go:49] node "pause-639553" has status "Ready":"True"
	I1024 19:36:16.443946  661242 node_ready.go:38] duration metric: took 3.765253ms waiting for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443983  661242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:16.451134  661242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622771  661242 pod_ready.go:92] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.622809  661242 pod_ready.go:81] duration metric: took 171.622719ms waiting for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622825  661242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022364  661242 pod_ready.go:92] pod "etcd-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.022397  661242 pod_ready.go:81] duration metric: took 399.563063ms waiting for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421720  661242 pod_ready.go:92] pod "kube-apiserver-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.421770  661242 pod_ready.go:81] duration metric: took 399.343736ms waiting for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421788  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822238  661242 pod_ready.go:92] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.822328  661242 pod_ready.go:81] duration metric: took 400.52893ms waiting for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220815  661242 pod_ready.go:92] pod "kube-proxy-6r7cb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.220845  661242 pod_ready.go:81] duration metric: took 398.416603ms waiting for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220859  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620911  661242 pod_ready.go:92] pod "kube-scheduler-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.620956  661242 pod_ready.go:81] duration metric: took 400.087866ms waiting for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620967  661242 pod_ready.go:38] duration metric: took 2.176967945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:18.621006  661242 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:36:18.621185  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:36:18.633704  661242 api_server.go:72] duration metric: took 2.275361815s to wait for apiserver process to appear ...
	I1024 19:36:18.633741  661242 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:36:18.633767  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:18.639511  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1024 19:36:18.641563  661242 api_server.go:141] control plane version: v1.28.3
	I1024 19:36:18.641605  661242 api_server.go:131] duration metric: took 7.850024ms to wait for apiserver health ...
	I1024 19:36:18.641644  661242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:36:18.825386  661242 system_pods.go:59] 7 kube-system pods found
	I1024 19:36:18.825424  661242 system_pods.go:61] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:18.825429  661242 system_pods.go:61] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:18.825433  661242 system_pods.go:61] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:18.825438  661242 system_pods.go:61] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:18.825442  661242 system_pods.go:61] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:18.825453  661242 system_pods.go:61] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:18.825458  661242 system_pods.go:61] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:18.825465  661242 system_pods.go:74] duration metric: took 183.813094ms to wait for pod list to return data ...
	I1024 19:36:18.825473  661242 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:36:19.020319  661242 default_sa.go:45] found service account: "default"
	I1024 19:36:19.020353  661242 default_sa.go:55] duration metric: took 194.871491ms for default service account to be created ...
	I1024 19:36:19.020366  661242 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:36:19.228102  661242 system_pods.go:86] 7 kube-system pods found
	I1024 19:36:19.228286  661242 system_pods.go:89] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:19.228899  661242 system_pods.go:89] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:19.228980  661242 system_pods.go:89] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:19.229002  661242 system_pods.go:89] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:19.229027  661242 system_pods.go:89] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:19.229112  661242 system_pods.go:89] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:19.229133  661242 system_pods.go:89] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:19.229166  661242 system_pods.go:126] duration metric: took 208.790589ms to wait for k8s-apps to be running ...
	I1024 19:36:19.229201  661242 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:36:19.229297  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:19.248522  661242 system_svc.go:56] duration metric: took 19.298431ms WaitForService to wait for kubelet.
	I1024 19:36:19.248559  661242 kubeadm.go:581] duration metric: took 2.890223976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:36:19.248583  661242 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:36:19.421683  661242 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:36:19.421721  661242 node_conditions.go:123] node cpu capacity is 8
	I1024 19:36:19.421738  661242 node_conditions.go:105] duration metric: took 173.148432ms to run NodePressure ...
	I1024 19:36:19.421757  661242 start.go:228] waiting for startup goroutines ...
	I1024 19:36:19.421767  661242 start.go:233] waiting for cluster config update ...
	I1024 19:36:19.421788  661242 start.go:242] writing updated cluster config ...
	I1024 19:36:19.475690  661242 ssh_runner.go:195] Run: rm -f paused
	I1024 19:36:19.573898  661242 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:36:19.728498  661242 out.go:177] * Done! kubectl is now configured to use "pause-639553" cluster and "default" namespace by default

                                                
                                                
** /stderr **
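Reader aid (illustrative only, not part of the captured output): the stderr log above shows minikube's api_server.go polling https://192.168.67.2:8443/healthz roughly every 500ms until it returns 200, riding out the 403 ("system:anonymous") and 500 ("healthz check failed") responses while the restarted apiserver finishes its post-start hooks. A minimal Go sketch of such a poll loop follows; InsecureSkipVerify is a stand-in for the profile client certificates the real client loads (see the kapi.go rest.Config line above), and the status-code progression in the comments is simply what this log happened to record.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Client with a short timeout; skip TLS verification for the sketch
		// (minikube's real client trusts the profile's ca.crt instead).
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 40; i++ {
			resp, err := client.Get("https://192.168.67.2:8443/healthz")
			if err != nil {
				// e.g. "connection refused" while the apiserver is restarting
				fmt.Println("stopped:", err)
			} else {
				fmt.Println("healthz returned", resp.StatusCode) // 403 -> 500 -> 200 in the log above
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return // healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
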
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-639553
helpers_test.go:235: (dbg) docker inspect pause-639553:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8",
	        "Created": "2023-10-24T19:34:51.838222613Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 652148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:34:52.234192848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8-json.log",
	        "Name": "/pause-639553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-639553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-639553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f-init/diff:/var/lib/docker/overlay2/a59d6c70e56c008d6cc4bbed94412eb512943c9d608e3d99495b95d6ce6d39c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-639553",
	                "Source": "/var/lib/docker/volumes/pause-639553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-639553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-639553",
	                "name.minikube.sigs.k8s.io": "pause-639553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "273f0029b354cfdc90f45b7aa5ea4205a2d8236614b978476430ef54c607f839",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/273f0029b354",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-639553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "177ceb9a18a5",
	                        "pause-639553"
	                    ],
	                    "NetworkID": "00acab23e15c963cecb1fb5ff67797797bf3e5234194be048959b87d2895fc30",
	                    "EndpointID": "e8c1dc729b57439b76bcabab6194880c7c461918afa6793605cd41d2567b28b9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
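
(Editorial note: the NetworkSettings.Ports map in the inspect dump above is what the post-mortem helpers rely on when they reach the node over SSH. Below is a minimal Go sketch of pulling a host port out of that map; the sample string is trimmed from the output above, and the portBinding type is a local stand-in for illustration, not a minikube type.)

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // portBinding mirrors the {"HostIp": ..., "HostPort": ...} entries
    // shown under NetworkSettings.Ports in the docker inspect dump above.
    type portBinding struct {
    	HostIp   string
    	HostPort string
    }

    func main() {
    	// trimmed sample taken from the inspect output above
    	raw := `{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"33397"}]}`
    	ports := map[string][]portBinding{}
    	if err := json.Unmarshal([]byte(raw), &ports); err != nil {
    		panic(err)
    	}
    	for _, b := range ports["22/tcp"] {
    		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
    	}
    }
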
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-639553 -n pause-639553
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-639553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-639553 logs -n 25: (3.267605361s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo docker                         | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo find                           | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo crio                           | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-973203                                     | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:35 UTC |
	| start   | -p force-systemd-flag-453049                         | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:36 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-742303 ssh                              | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:36 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-742303 -- sudo                       | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-742303                               | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	| start   | -p old-k8s-version-880692                            | old-k8s-version-880692    | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-453049 ssh cat                    | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-453049                         | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	| start   | -p no-preload-539193                                 | no-preload-539193         | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                          |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:36:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:36:17.660656  676108 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:36:17.660972  676108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:36:17.660980  676108 out.go:309] Setting ErrFile to fd 2...
	I1024 19:36:17.660988  676108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:36:17.661546  676108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:36:17.662556  676108 out.go:303] Setting JSON to false
	I1024 19:36:17.665578  676108 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11925,"bootTime":1698164253,"procs":582,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:36:17.665777  676108 start.go:138] virtualization: kvm guest
	I1024 19:36:17.669687  676108 out.go:177] * [no-preload-539193] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:36:17.672927  676108 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:36:17.672877  676108 notify.go:220] Checking for updates...
	I1024 19:36:17.675075  676108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:36:17.676931  676108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:36:17.678947  676108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:36:17.680645  676108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:36:17.682310  676108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:36:17.684466  676108 config.go:182] Loaded profile config "kubernetes-upgrade-830809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:36:17.684645  676108 config.go:182] Loaded profile config "old-k8s-version-880692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 19:36:17.684910  676108 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:36:17.685017  676108 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:36:17.718823  676108 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:36:17.719053  676108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:36:17.811828  676108 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:36:17.795518548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:36:17.812036  676108 docker.go:295] overlay module found
	I1024 19:36:17.815995  676108 out.go:177] * Using the docker driver based on user configuration
	I1024 19:36:17.818612  676108 start.go:298] selected driver: docker
	I1024 19:36:17.818646  676108 start.go:902] validating driver "docker" against <nil>
	I1024 19:36:17.818668  676108 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:36:17.820345  676108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:36:17.915040  676108 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:36:17.902933019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:36:17.915267  676108 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:36:17.915577  676108 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:36:17.917866  676108 out.go:177] * Using Docker driver with root privileges
	I1024 19:36:17.919839  676108 cni.go:84] Creating CNI manager for ""
	I1024 19:36:17.919878  676108 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:36:17.919897  676108 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:36:17.919928  676108 start_flags.go:323] config:
	{Name:no-preload-539193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-539193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:36:17.922129  676108 out.go:177] * Starting control plane node no-preload-539193 in cluster no-preload-539193
	I1024 19:36:17.923839  676108 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:36:17.925739  676108 out.go:177] * Pulling base image ...
	I1024 19:36:17.927709  676108 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:36:17.927908  676108 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:36:17.928013  676108 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/no-preload-539193/config.json ...
	I1024 19:36:17.928096  676108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/no-preload-539193/config.json: {Name:mk3400da1cb6b7f60baffbcba34882393496de52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:36:17.928217  676108 cache.go:107] acquiring lock: {Name:mk23591311b66e09432581f0a19b8da3091dab5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928284  676108 cache.go:107] acquiring lock: {Name:mk5b5adc26d51a7eeb5339b5f64d63ce79b5d757 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928335  676108 cache.go:107] acquiring lock: {Name:mk1d5242b0bb7a3a2f80b3ab514fa4cedca6e935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928358  676108 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:36:17.928288  676108 cache.go:107] acquiring lock: {Name:mkaa50fc513899a38b4a3875889084928cee2bcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928405  676108 cache.go:107] acquiring lock: {Name:mk604b9526e4448eb90f0c1a95f6ae7c3da4cddc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928508  676108 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 19:36:17.928496  676108 cache.go:107] acquiring lock: {Name:mkd38e8a41cc48d7f58bbc493a23bf637325b72d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928563  676108 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 19:36:17.928567  676108 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 19:36:17.928614  676108 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 19:36:17.928224  676108 cache.go:107] acquiring lock: {Name:mk5182f83699bdccac2fab0c36cc1e8590cc670c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928380  676108 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 200.305µs
	I1024 19:36:17.928766  676108 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:36:17.928504  676108 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 19:36:17.928851  676108 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 19:36:17.929714  676108 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 19:36:17.929743  676108 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 19:36:17.929800  676108 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 19:36:17.929886  676108 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 19:36:17.929920  676108 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 19:36:17.930268  676108 cache.go:107] acquiring lock: {Name:mkb7382367c30a8818525ba9899979e6526210a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.930468  676108 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 19:36:17.931652  676108 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 19:36:17.931750  676108 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 19:36:17.962219  676108 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:36:17.962301  676108 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:36:17.962342  676108 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:36:17.962425  676108 start.go:365] acquiring machines lock for no-preload-539193: {Name:mk7f4f8343db834aa651184658c69a40f5e62fbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.962631  676108 start.go:369] acquired machines lock for "no-preload-539193" in 164.529µs
	I1024 19:36:17.962680  676108 start.go:93] Provisioning new machine with config: &{Name:no-preload-539193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-539193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:36:17.962927  676108 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:36:16.860962  672747 cli_runner.go:164] Run: docker network inspect old-k8s-version-880692 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:36:16.882895  672747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1024 19:36:16.887667  672747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:36:16.901941  672747 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:36:16.902037  672747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:36:16.958827  672747 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 19:36:16.958930  672747 ssh_runner.go:195] Run: which lz4
	I1024 19:36:16.963068  672747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:36:16.967232  672747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:36:16.967288  672747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
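
(Editorial note: the lines above show the check-then-copy pattern minikube uses for preload tarballs: stat the node-side path, and only scp the cached tarball when the stat fails. A minimal local Go sketch of the same decision, assuming a local path stands in for the node-side one; the real check runs over SSH via ssh_runner.)

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// stand-in for the node-side path checked in the log above
    	const preload = "/preloaded.tar.lz4"
    	if _, err := os.Stat(preload); err != nil {
    		// corresponds to the "No such file or directory" branch:
    		// the cached tarball would now be copied over with scp
    		fmt.Println("preload missing, would copy cached tarball:", err)
    		return
    	}
    	fmt.Println("preload already present, skipping copy")
    }
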
	I1024 19:36:16.362266  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:16.440080  661242 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:36:16.440164  661242 node_ready.go:35] waiting up to 6m0s for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443917  661242 node_ready.go:49] node "pause-639553" has status "Ready":"True"
	I1024 19:36:16.443946  661242 node_ready.go:38] duration metric: took 3.765253ms waiting for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443983  661242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:16.451134  661242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622771  661242 pod_ready.go:92] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.622809  661242 pod_ready.go:81] duration metric: took 171.622719ms waiting for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622825  661242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022364  661242 pod_ready.go:92] pod "etcd-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.022397  661242 pod_ready.go:81] duration metric: took 399.563063ms waiting for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421720  661242 pod_ready.go:92] pod "kube-apiserver-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.421770  661242 pod_ready.go:81] duration metric: took 399.343736ms waiting for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421788  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822238  661242 pod_ready.go:92] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.822328  661242 pod_ready.go:81] duration metric: took 400.52893ms waiting for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220815  661242 pod_ready.go:92] pod "kube-proxy-6r7cb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.220845  661242 pod_ready.go:81] duration metric: took 398.416603ms waiting for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220859  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620911  661242 pod_ready.go:92] pod "kube-scheduler-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.620956  661242 pod_ready.go:81] duration metric: took 400.087866ms waiting for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620967  661242 pod_ready.go:38] duration metric: took 2.176967945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:18.621006  661242 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:36:18.621185  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:36:18.633704  661242 api_server.go:72] duration metric: took 2.275361815s to wait for apiserver process to appear ...
	I1024 19:36:18.633741  661242 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:36:18.633767  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:18.639511  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1024 19:36:18.641563  661242 api_server.go:141] control plane version: v1.28.3
	I1024 19:36:18.641605  661242 api_server.go:131] duration metric: took 7.850024ms to wait for apiserver health ...
	I1024 19:36:18.641644  661242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:36:18.825386  661242 system_pods.go:59] 7 kube-system pods found
	I1024 19:36:18.825424  661242 system_pods.go:61] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:18.825429  661242 system_pods.go:61] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:18.825433  661242 system_pods.go:61] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:18.825438  661242 system_pods.go:61] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:18.825442  661242 system_pods.go:61] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:18.825453  661242 system_pods.go:61] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:18.825458  661242 system_pods.go:61] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:18.825465  661242 system_pods.go:74] duration metric: took 183.813094ms to wait for pod list to return data ...
	I1024 19:36:18.825473  661242 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:36:19.020319  661242 default_sa.go:45] found service account: "default"
	I1024 19:36:19.020353  661242 default_sa.go:55] duration metric: took 194.871491ms for default service account to be created ...
	I1024 19:36:19.020366  661242 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:36:19.228102  661242 system_pods.go:86] 7 kube-system pods found
	I1024 19:36:19.228286  661242 system_pods.go:89] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:19.228899  661242 system_pods.go:89] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:19.228980  661242 system_pods.go:89] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:19.229002  661242 system_pods.go:89] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:19.229027  661242 system_pods.go:89] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:19.229112  661242 system_pods.go:89] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:19.229133  661242 system_pods.go:89] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:19.229166  661242 system_pods.go:126] duration metric: took 208.790589ms to wait for k8s-apps to be running ...
	I1024 19:36:19.229201  661242 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:36:19.229297  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:19.248522  661242 system_svc.go:56] duration metric: took 19.298431ms WaitForService to wait for kubelet.
	I1024 19:36:19.248559  661242 kubeadm.go:581] duration metric: took 2.890223976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:36:19.248583  661242 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:36:19.421683  661242 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:36:19.421721  661242 node_conditions.go:123] node cpu capacity is 8
	I1024 19:36:19.421738  661242 node_conditions.go:105] duration metric: took 173.148432ms to run NodePressure ...
	I1024 19:36:19.421757  661242 start.go:228] waiting for startup goroutines ...
	I1024 19:36:19.421767  661242 start.go:233] waiting for cluster config update ...
	I1024 19:36:19.421788  661242 start.go:242] writing updated cluster config ...
	I1024 19:36:19.475690  661242 ssh_runner.go:195] Run: rm -f paused
	I1024 19:36:19.573898  661242 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:36:19.728498  661242 out.go:177] * Done! kubectl is now configured to use "pause-639553" cluster and "default" namespace by default
	I1024 19:36:15.873132  637871 cri.go:89] found id: ""
	I1024 19:36:15.873161  637871 logs.go:284] 0 containers: []
	W1024 19:36:15.873171  637871 logs.go:286] No container was found matching "kube-proxy"
	I1024 19:36:15.873180  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:36:15.873238  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:36:15.914034  637871 cri.go:89] found id: "ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:15.914057  637871 cri.go:89] found id: ""
	I1024 19:36:15.914065  637871 logs.go:284] 1 containers: [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f]
	I1024 19:36:15.914112  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:15.917955  637871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:36:15.918044  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:36:15.963049  637871 cri.go:89] found id: ""
	I1024 19:36:15.963086  637871 logs.go:284] 0 containers: []
	W1024 19:36:15.963098  637871 logs.go:286] No container was found matching "kindnet"
	I1024 19:36:15.963108  637871 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 19:36:15.963173  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 19:36:16.002416  637871 cri.go:89] found id: ""
	I1024 19:36:16.002451  637871 logs.go:284] 0 containers: []
	W1024 19:36:16.002459  637871 logs.go:286] No container was found matching "storage-provisioner"
	I1024 19:36:16.002473  637871 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:36:16.002492  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1024 19:36:16.069573  637871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1024 19:36:16.069606  637871 logs.go:123] Gathering logs for kube-apiserver [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22] ...
	I1024 19:36:16.069625  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:16.121177  637871 logs.go:123] Gathering logs for kube-scheduler [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d] ...
	I1024 19:36:16.121298  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
	I1024 19:36:16.210897  637871 logs.go:123] Gathering logs for kube-controller-manager [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f] ...
	I1024 19:36:16.210940  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:16.246990  637871 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:36:16.247021  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:36:16.286805  637871 logs.go:123] Gathering logs for container status ...
	I1024 19:36:16.286844  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:36:16.327629  637871 logs.go:123] Gathering logs for kubelet ...
	I1024 19:36:16.327664  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:36:16.426664  637871 logs.go:123] Gathering logs for dmesg ...
	I1024 19:36:16.426716  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:36:18.957146  637871 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1024 19:36:18.957615  637871 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1024 19:36:18.957675  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:36:18.957743  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:36:18.999164  637871 cri.go:89] found id: "02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:18.999187  637871 cri.go:89] found id: ""
	I1024 19:36:18.999196  637871 logs.go:284] 1 containers: [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22]
	I1024 19:36:18.999250  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.003056  637871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:36:19.003133  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:36:19.047460  637871 cri.go:89] found id: ""
	I1024 19:36:19.047502  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.047514  637871 logs.go:286] No container was found matching "etcd"
	I1024 19:36:19.047524  637871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:36:19.047604  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:36:19.099990  637871 cri.go:89] found id: ""
	I1024 19:36:19.100022  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.100032  637871 logs.go:286] No container was found matching "coredns"
	I1024 19:36:19.100042  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:36:19.100166  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:36:19.160959  637871 cri.go:89] found id: "3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
	I1024 19:36:19.160986  637871 cri.go:89] found id: ""
	I1024 19:36:19.160997  637871 logs.go:284] 1 containers: [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d]
	I1024 19:36:19.161133  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.169861  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:36:19.169962  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:36:19.235322  637871 cri.go:89] found id: ""
	I1024 19:36:19.235353  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.235363  637871 logs.go:286] No container was found matching "kube-proxy"
	I1024 19:36:19.235372  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:36:19.235435  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:36:19.293902  637871 cri.go:89] found id: "ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:19.293942  637871 cri.go:89] found id: ""
	I1024 19:36:19.293954  637871 logs.go:284] 1 containers: [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f]
	I1024 19:36:19.294012  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.298662  637871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:36:19.298788  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:36:19.353276  637871 cri.go:89] found id: ""
	I1024 19:36:19.353310  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.353322  637871 logs.go:286] No container was found matching "kindnet"
	I1024 19:36:19.353331  637871 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 19:36:19.353404  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 19:36:19.408180  637871 cri.go:89] found id: ""
	I1024 19:36:19.408215  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.408226  637871 logs.go:286] No container was found matching "storage-provisioner"
	I1024 19:36:19.408238  637871 logs.go:123] Gathering logs for kube-controller-manager [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f] ...
	I1024 19:36:19.408257  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:19.464200  637871 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:36:19.464246  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:36:19.515800  637871 logs.go:123] Gathering logs for container status ...
	I1024 19:36:19.515845  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:36:19.575510  637871 logs.go:123] Gathering logs for kubelet ...
	I1024 19:36:19.575548  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:36:19.685547  637871 logs.go:123] Gathering logs for dmesg ...
	I1024 19:36:19.685590  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:36:19.710171  637871 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:36:19.710229  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1024 19:36:19.779590  637871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1024 19:36:19.779616  637871 logs.go:123] Gathering logs for kube-apiserver [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22] ...
	I1024 19:36:19.779631  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:19.822616  637871 logs.go:123] Gathering logs for kube-scheduler [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d] ...
	I1024 19:36:19.822656  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
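	
	The gathering pass above can be replayed by hand against the same node; a minimal sketch, assuming the docker driver and the profile name pause-639553 that appears in the logs below (substitute a real container ID for the placeholder):
	
	  # list containers for one component, then tail its logs (mirrors logs.go:123 above)
	  out/minikube-linux-amd64 -p pause-639553 ssh "sudo crictl ps -a --quiet --name=kube-scheduler"
	  out/minikube-linux-amd64 -p pause-639553 ssh "sudo /usr/bin/crictl logs --tail 400 <container-id>"
	  # unit logs for the runtime and the kubelet
	  out/minikube-linux-amd64 -p pause-639553 ssh "sudo journalctl -u crio -n 400"
	  out/minikube-linux-amd64 -p pause-639553 ssh "sudo journalctl -u kubelet -n 400"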
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.707457917Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fd56700c08f9da51e26420aac125e3acea2720a275ed844b01d1035330976280/merged/etc/group: no such file or directory"
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.864397991Z" level=info msg="Created container 04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7: kube-system/kube-proxy-6r7cb/kube-proxy" id=5189832f-26a8-49bf-93b4-8711b20b0243 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.865044349Z" level=info msg="Starting container: 04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7" id=ab37b987-6ecd-409b-b67a-2200fa073b34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.865838264Z" level=info msg="Created container 5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267: kube-system/kindnet-j6kq7/kindnet-cni" id=5ebc2928-09c0-4bf9-9cd9-67c71e6007ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.866310467Z" level=info msg="Starting container: 5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267" id=8feffc93-90e6-48b9-9cf7-c5038d93410e name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.877807782Z" level=info msg="Started container" PID=4129 containerID=5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267 description=kube-system/kindnet-j6kq7/kindnet-cni id=8feffc93-90e6-48b9-9cf7-c5038d93410e name=/runtime.v1.RuntimeService/StartContainer sandboxID=25d0797df60c88cf4246a02717c363a0cc375e9632f013bf3cb154625ffc7779
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.878273294Z" level=info msg="Started container" PID=4122 containerID=04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7 description=kube-system/kube-proxy-6r7cb/kube-proxy id=ab37b987-6ecd-409b-b67a-2200fa073b34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61ed8c57dfc2c7a865764231e323b7f3f9202e7f93ee33e69263f7088faae46d
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.885072309Z" level=info msg="Created container d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff: kube-system/coredns-5dd5756b68-9m8kb/coredns" id=605298c7-1e8c-465d-a0f2-a9c730055700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.941740348Z" level=info msg="Starting container: d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff" id=218944f5-d115-4981-9b0c-cdd19fa3f10c name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.956221634Z" level=info msg="Started container" PID=4131 containerID=d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff description=kube-system/coredns-5dd5756b68-9m8kb/coredns id=218944f5-d115-4981-9b0c-cdd19fa3f10c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f36905aacb6af86e58836da5e018f03880704c74b13509824f71198785e645ff
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.449586374Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460356606Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460390063Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460410386Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466817331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466849696Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466872074Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.479363669Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.479398814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.541436620Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.549978411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.550016911Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.550043273Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.555379224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.555417841Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d98c3aa91f29b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago      Running             coredns                   2                   f36905aacb6af       coredns-5dd5756b68-9m8kb
	04179cb9f5a79       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   21 seconds ago      Running             kube-proxy                2                   61ed8c57dfc2c       kube-proxy-6r7cb
	5079118168cb9       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   21 seconds ago      Running             kindnet-cni               2                   25d0797df60c8       kindnet-j6kq7
	a9a0d3327ecdf       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   24 seconds ago      Running             kube-apiserver            2                   5a2daf0b2d617       kube-apiserver-pause-639553
	8f362f68992d1       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   24 seconds ago      Running             kube-scheduler            3                   51f2c7b0b16b5       kube-scheduler-pause-639553
	c1f1d7c3a38d9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   24 seconds ago      Running             kube-controller-manager   3                   820612a14de06       kube-controller-manager-pause-639553
	4f763e6a35b2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago      Running             etcd                      3                   648d9baaf7da6       etcd-pause-639553
	7556dd7e77654       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   30 seconds ago      Exited              kube-controller-manager   2                   820612a14de06       kube-controller-manager-pause-639553
	f02c9006c5461       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   36 seconds ago      Exited              kube-scheduler            2                   51f2c7b0b16b5       kube-scheduler-pause-639553
	e616aa8f6da1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   36 seconds ago      Exited              etcd                      2                   648d9baaf7da6       etcd-pause-639553
	3e39e61ed3be1       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   49 seconds ago      Exited              kube-apiserver            1                   5a2daf0b2d617       kube-apiserver-pause-639553
	72fd13232bea6       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   49 seconds ago      Exited              kindnet-cni               1                   25d0797df60c8       kindnet-j6kq7
	d6ca43cfddca0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   50 seconds ago      Exited              coredns                   1                   f36905aacb6af       coredns-5dd5756b68-9m8kb
	2a119c4fecb6a       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   50 seconds ago      Exited              kube-proxy                1                   61ed8c57dfc2c       kube-proxy-6r7cb
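	
	The table above shows the second and third attempts Running while the earlier attempts sit in Exited; the exited set can be isolated directly with crictl, a sketch assuming a shell on the node:
	
	  # only containers that have exited (the pre-restart attempts above)
	  sudo crictl ps -a --state exited
	  # inspect one of them, e.g. the first kube-apiserver attempt from the table
	  sudo crictl inspect 3e39e61ed3be1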
	
	* 
	* ==> coredns [d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39517 - 52058 "HINFO IN 8041968620250387946.5613235457838200228. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030855643s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44919 - 17757 "HINFO IN 6207427719502831205.5883949342991463497. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029389787s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-639553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-639553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=pause-639553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_35_08_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:35:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-639553
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:36:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-639553
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 36cc75f624ac4a89b6bdc2afc3b63fb5
	  System UUID:                6d3cac5e-0436-491c-b68f-ac2b4782dfce
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-9m8kb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
	  kube-system                 etcd-pause-639553                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         74s
	  kube-system                 kindnet-j6kq7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-pause-639553             250m (3%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-pause-639553    200m (2%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-6r7cb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-pause-639553             100m (1%)     0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x8 over 82s)  kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     75s                kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  75s                kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s                kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           63s                node-controller  Node pause-639553 event: Registered Node pause-639553 in Controller
	  Normal  NodeReady                61s                kubelet          Node pause-639553 status is now: NodeReady
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x8 over 26s)  kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10s                node-controller  Node pause-639553 event: Registered Node pause-639553 in Controller
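	
	The same node view can be fetched from the host once the apiserver is reachable again (the in-band attempt at 19:36:19 above failed with connection refused); a sketch assuming the usual minikube context name pause-639553:
	
	  kubectl --context pause-639553 describe node pause-639553
	  kubectl --context pause-639553 get pods -n kube-system -o wide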
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +4.223578] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +8.191215] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000006] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[Oct24 19:25] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000008] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +1.010575] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000006] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +2.015766] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +4.223661] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000008] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +8.191175] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000009] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[Oct24 19:28] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000012] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +1.025209] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000005] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +2.011840] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000036] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +4.067487] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000007] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +8.191280] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000007] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> etcd [4f763e6a35b2c086c5f7cc903f23b8afbfdf5b36caa0cbcfdc6405ca616c7028] <==
	* {"level":"info","ts":"2023-10-24T19:35:57.445295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:35:57.450762Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T19:35:57.451099Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:35:57.451171Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:35:57.451331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:57.451352Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:58.386827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.389505Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-639553 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:35:58.389522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:58.389699Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:58.389859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:58.389917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:58.391361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-24T19:35:58.391541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:36:09.341635Z","caller":"traceutil/trace.go:171","msg":"trace[2059745744] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"102.348208ms","start":"2023-10-24T19:36:09.239266Z","end":"2023-10-24T19:36:09.341615Z","steps":["trace[2059745744] 'read index received'  (duration: 38.823201ms)","trace[2059745744] 'applied index is now lower than readState.Index'  (duration: 63.522093ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:36:09.341873Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.610738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-639553\" ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2023-10-24T19:36:09.34199Z","caller":"traceutil/trace.go:171","msg":"trace[1786836361] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-639553; range_end:; response_count:1; response_revision:485; }","duration":"102.743996ms","start":"2023-10-24T19:36:09.239235Z","end":"2023-10-24T19:36:09.341979Z","steps":["trace[1786836361] 'agreement among raft nodes before linearized reading'  (duration: 102.551213ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:36:09.34198Z","caller":"traceutil/trace.go:171","msg":"trace[2131638994] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"154.880532ms","start":"2023-10-24T19:36:09.187066Z","end":"2023-10-24T19:36:09.341947Z","steps":["trace[2131638994] 'process raft request'  (duration: 90.907764ms)","trace[2131638994] 'compare'  (duration: 63.535445ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:36:21.740415Z","caller":"traceutil/trace.go:171","msg":"trace[1774707635] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"103.904947ms","start":"2023-10-24T19:36:21.636485Z","end":"2023-10-24T19:36:21.74039Z","steps":["trace[1774707635] 'process raft request'  (duration: 68.914655ms)","trace[1774707635] 'compare'  (duration: 34.852859ms)"],"step_count":2}
	
	* 
	* ==> etcd [e616aa8f6da1b319d518f5a6de368ac08f1e1a4e9122121d273a6594f58b381a] <==
	* {"level":"info","ts":"2023-10-24T19:35:45.701276Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:35:47.090811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.092986Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-639553 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:35:47.092994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:47.093045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:47.093216Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:47.093244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:47.094379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:35:47.094601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-24T19:35:54.325976Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-24T19:35:54.326029Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-639553","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-24T19:35:54.326132Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.326155Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.327601Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.327647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-24T19:35:54.327696Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-24T19:35:54.330322Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:54.330466Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:54.330485Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-639553","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:36:23 up  3:18,  0 users,  load average: 5.60, 3.68, 2.18
	Linux pause-639553 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267] <==
	* I1024 19:36:01.043676       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:36:01.044601       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1024 19:36:01.045030       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:36:01.045120       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:36:01.045185       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:36:01.449153       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:01.449193       1 main.go:227] handling current node
	I1024 19:36:11.555873       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:11.555905       1 main.go:227] handling current node
	I1024 19:36:21.569412       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:21.569458       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917] <==
	* I1024 19:35:32.555642       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:35:32.555903       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1024 19:35:32.641429       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:35:32.641719       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:35:32.641810       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:35:33.041444       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.559820       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.562242       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.578331       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [a9a0d3327ecdf7dd07b74e853bdad7048539ccae49bfb91a8f30e092b882e4b4] <==
	* I1024 19:36:00.209330       1 controller.go:85] Starting OpenAPI V3 controller
	I1024 19:36:00.209372       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1024 19:36:00.209383       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 19:36:00.209739       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1024 19:36:00.209718       1 controller.go:78] Starting OpenAPI AggregationController
	I1024 19:36:00.341056       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:36:00.341277       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:36:00.341326       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:36:00.341366       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:36:00.343150       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:36:00.358842       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:36:00.365424       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 19:36:00.443246       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:36:00.443482       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:36:00.443676       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:36:00.443774       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 19:36:00.443741       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 19:36:00.444716       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1024 19:36:00.464476       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 19:36:01.214519       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:36:02.396143       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:36:02.554437       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:36:02.567218       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:36:02.664072       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:36:02.675292       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [7556dd7e77654b82a85d9084c0ecdd4d2247163f51098b477845e37c6b4832b7] <==
	* I1024 19:35:52.629191       1 serving.go:348] Generated self-signed cert in-memory
	I1024 19:35:52.888182       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1024 19:35:52.888217       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:35:52.889560       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1024 19:35:52.889639       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1024 19:35:52.890346       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1024 19:35:52.890606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [c1f1d7c3a38d9cb392fc7bd632bb227616cf1a0dd698730a50985bca0b466ce1] <==
	* I1024 19:36:12.603073       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1024 19:36:12.605274       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1024 19:36:12.605326       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1024 19:36:12.607987       1 shared_informer.go:318] Caches are synced for GC
	I1024 19:36:12.610025       1 shared_informer.go:318] Caches are synced for crt configmap
	I1024 19:36:12.612367       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1024 19:36:12.612765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="216.795µs"
	I1024 19:36:12.614835       1 shared_informer.go:318] Caches are synced for cronjob
	I1024 19:36:12.619189       1 shared_informer.go:318] Caches are synced for endpoint
	I1024 19:36:12.619306       1 shared_informer.go:318] Caches are synced for disruption
	I1024 19:36:12.626860       1 shared_informer.go:318] Caches are synced for stateful set
	I1024 19:36:12.664183       1 shared_informer.go:318] Caches are synced for attach detach
	I1024 19:36:12.711746       1 shared_informer.go:318] Caches are synced for daemon sets
	I1024 19:36:12.739943       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:36:12.797325       1 shared_informer.go:318] Caches are synced for taint
	I1024 19:36:12.797407       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1024 19:36:12.797514       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1024 19:36:12.797548       1 taint_manager.go:211] "Sending events to api server"
	I1024 19:36:12.797674       1 event.go:307] "Event occurred" object="pause-639553" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-639553 event: Registered Node pause-639553 in Controller"
	I1024 19:36:12.797689       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-639553"
	I1024 19:36:12.797876       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1024 19:36:12.805145       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:36:13.133472       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:36:13.191098       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:36:13.191136       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7] <==
	* I1024 19:36:00.990470       1 server_others.go:69] "Using iptables proxy"
	I1024 19:36:01.050703       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1024 19:36:01.106319       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:36:01.109211       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:36:01.109298       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:36:01.109308       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:36:01.109337       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:36:01.109547       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:36:01.109789       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:36:01.110445       1 config.go:188] "Starting service config controller"
	I1024 19:36:01.110474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:36:01.110510       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:36:01.110513       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:36:01.111030       1 config.go:315] "Starting node config controller"
	I1024 19:36:01.116227       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:36:01.116261       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:36:01.211550       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:36:01.211551       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda] <==
	* I1024 19:35:32.678468       1 server_others.go:69] "Using iptables proxy"
	E1024 19:35:32.742039       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-639553": dial tcp 192.168.67.2:8443: connect: connection refused
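	
	The healthy kube-proxy above runs the iptables proxier and sets route_localnet; both can be verified on the node, a sketch assuming shell access:
	
	  # service chains installed by the iptables proxier
	  sudo iptables -t nat -L KUBE-SERVICES | head
	  # sysctl toggled by the proxier per the log above
	  sudo sysctl net.ipv4.conf.all.route_localnet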
	
	* 
	* ==> kube-scheduler [8f362f68992d1d720b793dddcaf2439b2749b610398c9a8f56c9b870d75a37fd] <==
	* I1024 19:35:58.142049       1 serving.go:348] Generated self-signed cert in-memory
	W1024 19:36:00.261709       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 19:36:00.261824       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:36:00.261895       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 19:36:00.261929       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:36:00.351836       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:36:00.351975       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:36:00.354854       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:36:00.354964       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:36:00.355449       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:36:00.355599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:36:00.459022       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f02c9006c5461fdb26a7158b616cd24749daedea6b0c4d0066c0016c947d9fe6] <==
	* W1024 19:35:50.589133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.589215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.718403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.718470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.810982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.811031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.906726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.906777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.994911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.994974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:51.104285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:51.104331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:51.187596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:51.187635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:52.796768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:52.796983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.444707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.444877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.631100       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.631145       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.827675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.827757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:54.174228       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E1024 19:35:54.174456       1 run.go:74] "command failed" err="finished without leader elect"
	E1024 19:35:54.174486       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.180765    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: W1024 19:35:57.292292    3790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.292396    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: W1024 19:35:57.341774    3790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.341909    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: I1024 19:35:57.848523    3790 kubelet_node_status.go:70] "Attempting to register node" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.344678    3790 apiserver.go:52] "Watching apiserver"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.357771    3790 topology_manager.go:215] "Topology Admit Handler" podUID="f30348b5-115d-4161-a406-07b8e208de06" podNamespace="kube-system" podName="kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.357931    3790 topology_manager.go:215] "Topology Admit Handler" podUID="1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3" podNamespace="kube-system" podName="coredns-5dd5756b68-9m8kb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.358017    3790 topology_manager.go:215] "Topology Admit Handler" podUID="efda4578-700d-40de-a3f9-060bebdfddc6" podNamespace="kube-system" podName="kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.441445    3790 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445698    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-cni-cfg\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445764    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-xtables-lock\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445797    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-lib-modules\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445853    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f30348b5-115d-4161-a406-07b8e208de06-xtables-lock\") pod \"kube-proxy-6r7cb\" (UID: \"f30348b5-115d-4161-a406-07b8e208de06\") " pod="kube-system/kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445884    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f30348b5-115d-4161-a406-07b8e208de06-lib-modules\") pod \"kube-proxy-6r7cb\" (UID: \"f30348b5-115d-4161-a406-07b8e208de06\") " pod="kube-system/kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.465388    3790 kubelet_node_status.go:108] "Node was previously registered" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.465549    3790 kubelet_node_status.go:73] "Successfully registered node" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.467724    3790 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.469559    3790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: E1024 19:36:00.545191    3790 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-639553\" already exists" pod="kube-system/kube-apiserver-pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.661002    3790 scope.go:117] "RemoveContainer" containerID="72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.662576    3790 scope.go:117] "RemoveContainer" containerID="2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.662784    3790 scope.go:117] "RemoveContainer" containerID="d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311"
	Oct 24 19:36:07 pause-639553 kubelet[3790]: I1024 19:36:07.898919    3790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-639553 -n pause-639553
helpers_test.go:261: (dbg) Run:  kubectl --context pause-639553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-639553
helpers_test.go:235: (dbg) docker inspect pause-639553:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8",
	        "Created": "2023-10-24T19:34:51.838222613Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 652148,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:34:52.234192848Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3e615aae66792e89a7d2c001b5c02b5e78a999706d53f7c8dbfcff1520487fdd",
	        "ResolvConfPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8/177ceb9a18a56f63e645d3de174329e419b3d62b6ffcf27038798a3e9baaf3b8-json.log",
	        "Name": "/pause-639553",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-639553:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-639553",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f-init/diff:/var/lib/docker/overlay2/a59d6c70e56c008d6cc4bbed94412eb512943c9d608e3d99495b95d6ce6d39c5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469079a86b614a9aea4d715835e842bc97ed85c83b163de92d6467ba3a715f4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-639553",
	                "Source": "/var/lib/docker/volumes/pause-639553/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-639553",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-639553",
	                "name.minikube.sigs.k8s.io": "pause-639553",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "273f0029b354cfdc90f45b7aa5ea4205a2d8236614b978476430ef54c607f839",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/273f0029b354",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-639553": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "177ceb9a18a5",
	                        "pause-639553"
	                    ],
	                    "NetworkID": "00acab23e15c963cecb1fb5ff67797797bf3e5234194be048959b87d2895fc30",
	                    "EndpointID": "e8c1dc729b57439b76bcabab6194880c7c461918afa6793605cd41d2567b28b9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-639553 -n pause-639553
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-639553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-639553 logs -n 25: (1.795461858s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo docker                         | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo cat                            | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo                                | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo find                           | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-973203 sudo crio                           | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-973203                                     | cilium-973203             | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:35 UTC |
	| start   | -p force-systemd-flag-453049                         | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:36 UTC |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | cert-options-742303 ssh                              | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:35 UTC | 24 Oct 23 19:36 UTC |
	|         | openssl x509 -text -noout -in                        |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |         |         |                     |                     |
	| ssh     | -p cert-options-742303 -- sudo                       | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |         |         |                     |                     |
	| delete  | -p cert-options-742303                               | cert-options-742303       | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	| start   | -p old-k8s-version-880692                            | old-k8s-version-880692    | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-453049 ssh cat                    | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                   |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-453049                         | force-systemd-flag-453049 | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC | 24 Oct 23 19:36 UTC |
	| start   | -p no-preload-539193                                 | no-preload-539193         | jenkins | v1.31.2 | 24 Oct 23 19:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                          |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:36:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:36:17.660656  676108 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:36:17.660972  676108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:36:17.660980  676108 out.go:309] Setting ErrFile to fd 2...
	I1024 19:36:17.660988  676108 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:36:17.661546  676108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:36:17.662556  676108 out.go:303] Setting JSON to false
	I1024 19:36:17.665578  676108 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11925,"bootTime":1698164253,"procs":582,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:36:17.665777  676108 start.go:138] virtualization: kvm guest
	I1024 19:36:17.669687  676108 out.go:177] * [no-preload-539193] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:36:17.672927  676108 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:36:17.672877  676108 notify.go:220] Checking for updates...
	I1024 19:36:17.675075  676108 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:36:17.676931  676108 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:36:17.678947  676108 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:36:17.680645  676108 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:36:17.682310  676108 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:36:17.684466  676108 config.go:182] Loaded profile config "kubernetes-upgrade-830809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:36:17.684645  676108 config.go:182] Loaded profile config "old-k8s-version-880692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1024 19:36:17.684910  676108 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:36:17.685017  676108 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:36:17.718823  676108 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:36:17.719053  676108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:36:17.811828  676108 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:36:17.795518548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:36:17.812036  676108 docker.go:295] overlay module found
	I1024 19:36:17.815995  676108 out.go:177] * Using the docker driver based on user configuration
	I1024 19:36:17.818612  676108 start.go:298] selected driver: docker
	I1024 19:36:17.818646  676108 start.go:902] validating driver "docker" against <nil>
	I1024 19:36:17.818668  676108 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:36:17.820345  676108 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:36:17.915040  676108 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:36:17.902933019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:36:17.915267  676108 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:36:17.915577  676108 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:36:17.917866  676108 out.go:177] * Using Docker driver with root privileges
	I1024 19:36:17.919839  676108 cni.go:84] Creating CNI manager for ""
	I1024 19:36:17.919878  676108 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:36:17.919897  676108 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:36:17.919928  676108 start_flags.go:323] config:
	{Name:no-preload-539193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-539193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:36:17.922129  676108 out.go:177] * Starting control plane node no-preload-539193 in cluster no-preload-539193
	I1024 19:36:17.923839  676108 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:36:17.925739  676108 out.go:177] * Pulling base image ...
	I1024 19:36:17.927709  676108 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:36:17.927908  676108 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:36:17.928013  676108 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/no-preload-539193/config.json ...
	I1024 19:36:17.928096  676108 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/no-preload-539193/config.json: {Name:mk3400da1cb6b7f60baffbcba34882393496de52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:36:17.928217  676108 cache.go:107] acquiring lock: {Name:mk23591311b66e09432581f0a19b8da3091dab5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928284  676108 cache.go:107] acquiring lock: {Name:mk5b5adc26d51a7eeb5339b5f64d63ce79b5d757 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928335  676108 cache.go:107] acquiring lock: {Name:mk1d5242b0bb7a3a2f80b3ab514fa4cedca6e935 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928358  676108 cache.go:115] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:36:17.928288  676108 cache.go:107] acquiring lock: {Name:mkaa50fc513899a38b4a3875889084928cee2bcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928405  676108 cache.go:107] acquiring lock: {Name:mk604b9526e4448eb90f0c1a95f6ae7c3da4cddc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928508  676108 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 19:36:17.928496  676108 cache.go:107] acquiring lock: {Name:mkd38e8a41cc48d7f58bbc493a23bf637325b72d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928563  676108 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 19:36:17.928567  676108 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I1024 19:36:17.928614  676108 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1024 19:36:17.928224  676108 cache.go:107] acquiring lock: {Name:mk5182f83699bdccac2fab0c36cc1e8590cc670c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.928380  676108 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 200.305µs
	I1024 19:36:17.928766  676108 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:36:17.928504  676108 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 19:36:17.928851  676108 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 19:36:17.929714  676108 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1024 19:36:17.929743  676108 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.3
	I1024 19:36:17.929800  676108 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I1024 19:36:17.929886  676108 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.3
	I1024 19:36:17.929920  676108 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.3
	I1024 19:36:17.930268  676108 cache.go:107] acquiring lock: {Name:mkb7382367c30a8818525ba9899979e6526210a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.930468  676108 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 19:36:17.931652  676108 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I1024 19:36:17.931750  676108 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.3
	I1024 19:36:17.962219  676108 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:36:17.962301  676108 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:36:17.962342  676108 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:36:17.962425  676108 start.go:365] acquiring machines lock for no-preload-539193: {Name:mk7f4f8343db834aa651184658c69a40f5e62fbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:36:17.962631  676108 start.go:369] acquired machines lock for "no-preload-539193" in 164.529µs
	I1024 19:36:17.962680  676108 start.go:93] Provisioning new machine with config: &{Name:no-preload-539193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-539193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:36:17.962927  676108 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:36:16.860962  672747 cli_runner.go:164] Run: docker network inspect old-k8s-version-880692 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:36:16.882895  672747 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1024 19:36:16.887667  672747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:36:16.901941  672747 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:36:16.902037  672747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:36:16.958827  672747 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 19:36:16.958930  672747 ssh_runner.go:195] Run: which lz4
	I1024 19:36:16.963068  672747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:36:16.967232  672747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:36:16.967288  672747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I1024 19:36:16.362266  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:16.440080  661242 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:36:16.440164  661242 node_ready.go:35] waiting up to 6m0s for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443917  661242 node_ready.go:49] node "pause-639553" has status "Ready":"True"
	I1024 19:36:16.443946  661242 node_ready.go:38] duration metric: took 3.765253ms waiting for node "pause-639553" to be "Ready" ...
	I1024 19:36:16.443983  661242 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:16.451134  661242 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622771  661242 pod_ready.go:92] pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:16.622809  661242 pod_ready.go:81] duration metric: took 171.622719ms waiting for pod "coredns-5dd5756b68-9m8kb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:16.622825  661242 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022364  661242 pod_ready.go:92] pod "etcd-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.022397  661242 pod_ready.go:81] duration metric: took 399.563063ms waiting for pod "etcd-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.022417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421720  661242 pod_ready.go:92] pod "kube-apiserver-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.421770  661242 pod_ready.go:81] duration metric: took 399.343736ms waiting for pod "kube-apiserver-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.421788  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822238  661242 pod_ready.go:92] pod "kube-controller-manager-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:17.822328  661242 pod_ready.go:81] duration metric: took 400.52893ms waiting for pod "kube-controller-manager-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:17.822417  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220815  661242 pod_ready.go:92] pod "kube-proxy-6r7cb" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.220845  661242 pod_ready.go:81] duration metric: took 398.416603ms waiting for pod "kube-proxy-6r7cb" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.220859  661242 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620911  661242 pod_ready.go:92] pod "kube-scheduler-pause-639553" in "kube-system" namespace has status "Ready":"True"
	I1024 19:36:18.620956  661242 pod_ready.go:81] duration metric: took 400.087866ms waiting for pod "kube-scheduler-pause-639553" in "kube-system" namespace to be "Ready" ...
	I1024 19:36:18.620967  661242 pod_ready.go:38] duration metric: took 2.176967945s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:36:18.621006  661242 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:36:18.621185  661242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:36:18.633704  661242 api_server.go:72] duration metric: took 2.275361815s to wait for apiserver process to appear ...
	I1024 19:36:18.633741  661242 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:36:18.633767  661242 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1024 19:36:18.639511  661242 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1024 19:36:18.641563  661242 api_server.go:141] control plane version: v1.28.3
	I1024 19:36:18.641605  661242 api_server.go:131] duration metric: took 7.850024ms to wait for apiserver health ...
	I1024 19:36:18.641644  661242 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:36:18.825386  661242 system_pods.go:59] 7 kube-system pods found
	I1024 19:36:18.825424  661242 system_pods.go:61] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:18.825429  661242 system_pods.go:61] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:18.825433  661242 system_pods.go:61] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:18.825438  661242 system_pods.go:61] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:18.825442  661242 system_pods.go:61] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:18.825453  661242 system_pods.go:61] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:18.825458  661242 system_pods.go:61] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:18.825465  661242 system_pods.go:74] duration metric: took 183.813094ms to wait for pod list to return data ...
	I1024 19:36:18.825473  661242 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:36:19.020319  661242 default_sa.go:45] found service account: "default"
	I1024 19:36:19.020353  661242 default_sa.go:55] duration metric: took 194.871491ms for default service account to be created ...
	I1024 19:36:19.020366  661242 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:36:19.228102  661242 system_pods.go:86] 7 kube-system pods found
	I1024 19:36:19.228286  661242 system_pods.go:89] "coredns-5dd5756b68-9m8kb" [1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3] Running
	I1024 19:36:19.228899  661242 system_pods.go:89] "etcd-pause-639553" [9000cce4-12d9-4d30-a847-437b7331ff5d] Running
	I1024 19:36:19.228980  661242 system_pods.go:89] "kindnet-j6kq7" [efda4578-700d-40de-a3f9-060bebdfddc6] Running
	I1024 19:36:19.229002  661242 system_pods.go:89] "kube-apiserver-pause-639553" [b49624f5-6926-4792-86d6-a8a07392bb1f] Running
	I1024 19:36:19.229027  661242 system_pods.go:89] "kube-controller-manager-pause-639553" [5aca0a5a-dc47-41f9-9cb4-2606c751a3e2] Running
	I1024 19:36:19.229112  661242 system_pods.go:89] "kube-proxy-6r7cb" [f30348b5-115d-4161-a406-07b8e208de06] Running
	I1024 19:36:19.229133  661242 system_pods.go:89] "kube-scheduler-pause-639553" [3b5157fb-0e71-496b-842f-44d63022e3c9] Running
	I1024 19:36:19.229166  661242 system_pods.go:126] duration metric: took 208.790589ms to wait for k8s-apps to be running ...
	I1024 19:36:19.229201  661242 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:36:19.229297  661242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:36:19.248522  661242 system_svc.go:56] duration metric: took 19.298431ms WaitForService to wait for kubelet.
	I1024 19:36:19.248559  661242 kubeadm.go:581] duration metric: took 2.890223976s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:36:19.248583  661242 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:36:19.421683  661242 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1024 19:36:19.421721  661242 node_conditions.go:123] node cpu capacity is 8
	I1024 19:36:19.421738  661242 node_conditions.go:105] duration metric: took 173.148432ms to run NodePressure ...
	I1024 19:36:19.421757  661242 start.go:228] waiting for startup goroutines ...
	I1024 19:36:19.421767  661242 start.go:233] waiting for cluster config update ...
	I1024 19:36:19.421788  661242 start.go:242] writing updated cluster config ...
	I1024 19:36:19.475690  661242 ssh_runner.go:195] Run: rm -f paused
	I1024 19:36:19.573898  661242 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:36:19.728498  661242 out.go:177] * Done! kubectl is now configured to use "pause-639553" cluster and "default" namespace by default
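The apiserver health wait logged above (api_server.go) is a plain poll of the /healthz endpoint until it answers 200 OK. A minimal Go sketch of that polling pattern, not minikube's actual implementation; the TLS-verification skip is an assumption made here for brevity, since the apiserver serves a cluster-internal certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it
    // answers 200 OK or the deadline passes. Illustrative only; the real
    // check also reads the reported control-plane version afterwards.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// Sketch assumption: skip verification of the cluster's
    		// self-signed certificate. Real code should pin the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }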
	I1024 19:36:15.873132  637871 cri.go:89] found id: ""
	I1024 19:36:15.873161  637871 logs.go:284] 0 containers: []
	W1024 19:36:15.873171  637871 logs.go:286] No container was found matching "kube-proxy"
	I1024 19:36:15.873180  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:36:15.873238  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:36:15.914034  637871 cri.go:89] found id: "ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:15.914057  637871 cri.go:89] found id: ""
	I1024 19:36:15.914065  637871 logs.go:284] 1 containers: [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f]
	I1024 19:36:15.914112  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:15.917955  637871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:36:15.918044  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:36:15.963049  637871 cri.go:89] found id: ""
	I1024 19:36:15.963086  637871 logs.go:284] 0 containers: []
	W1024 19:36:15.963098  637871 logs.go:286] No container was found matching "kindnet"
	I1024 19:36:15.963108  637871 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 19:36:15.963173  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 19:36:16.002416  637871 cri.go:89] found id: ""
	I1024 19:36:16.002451  637871 logs.go:284] 0 containers: []
	W1024 19:36:16.002459  637871 logs.go:286] No container was found matching "storage-provisioner"
	I1024 19:36:16.002473  637871 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:36:16.002492  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1024 19:36:16.069573  637871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1024 19:36:16.069606  637871 logs.go:123] Gathering logs for kube-apiserver [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22] ...
	I1024 19:36:16.069625  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:16.121177  637871 logs.go:123] Gathering logs for kube-scheduler [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d] ...
	I1024 19:36:16.121298  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
	I1024 19:36:16.210897  637871 logs.go:123] Gathering logs for kube-controller-manager [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f] ...
	I1024 19:36:16.210940  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:16.246990  637871 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:36:16.247021  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:36:16.286805  637871 logs.go:123] Gathering logs for container status ...
	I1024 19:36:16.286844  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:36:16.327629  637871 logs.go:123] Gathering logs for kubelet ...
	I1024 19:36:16.327664  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:36:16.426664  637871 logs.go:123] Gathering logs for dmesg ...
	I1024 19:36:16.426716  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:36:18.957146  637871 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1024 19:36:18.957615  637871 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1024 19:36:18.957675  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:36:18.957743  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:36:18.999164  637871 cri.go:89] found id: "02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:18.999187  637871 cri.go:89] found id: ""
	I1024 19:36:18.999196  637871 logs.go:284] 1 containers: [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22]
	I1024 19:36:18.999250  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.003056  637871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:36:19.003133  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:36:19.047460  637871 cri.go:89] found id: ""
	I1024 19:36:19.047502  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.047514  637871 logs.go:286] No container was found matching "etcd"
	I1024 19:36:19.047524  637871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:36:19.047604  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:36:19.099990  637871 cri.go:89] found id: ""
	I1024 19:36:19.100022  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.100032  637871 logs.go:286] No container was found matching "coredns"
	I1024 19:36:19.100042  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:36:19.100166  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:36:19.160959  637871 cri.go:89] found id: "3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
	I1024 19:36:19.160986  637871 cri.go:89] found id: ""
	I1024 19:36:19.160997  637871 logs.go:284] 1 containers: [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d]
	I1024 19:36:19.161133  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.169861  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:36:19.169962  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:36:19.235322  637871 cri.go:89] found id: ""
	I1024 19:36:19.235353  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.235363  637871 logs.go:286] No container was found matching "kube-proxy"
	I1024 19:36:19.235372  637871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:36:19.235435  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:36:19.293902  637871 cri.go:89] found id: "ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:19.293942  637871 cri.go:89] found id: ""
	I1024 19:36:19.293954  637871 logs.go:284] 1 containers: [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f]
	I1024 19:36:19.294012  637871 ssh_runner.go:195] Run: which crictl
	I1024 19:36:19.298662  637871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:36:19.298788  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:36:19.353276  637871 cri.go:89] found id: ""
	I1024 19:36:19.353310  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.353322  637871 logs.go:286] No container was found matching "kindnet"
	I1024 19:36:19.353331  637871 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1024 19:36:19.353404  637871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1024 19:36:19.408180  637871 cri.go:89] found id: ""
	I1024 19:36:19.408215  637871 logs.go:284] 0 containers: []
	W1024 19:36:19.408226  637871 logs.go:286] No container was found matching "storage-provisioner"
	I1024 19:36:19.408238  637871 logs.go:123] Gathering logs for kube-controller-manager [ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f] ...
	I1024 19:36:19.408257  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba38fcd59d272ab06a889867e17a0a1baa358df4946b6c0af3aafab972dddd2f"
	I1024 19:36:19.464200  637871 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:36:19.464246  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:36:19.515800  637871 logs.go:123] Gathering logs for container status ...
	I1024 19:36:19.515845  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:36:19.575510  637871 logs.go:123] Gathering logs for kubelet ...
	I1024 19:36:19.575548  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:36:19.685547  637871 logs.go:123] Gathering logs for dmesg ...
	I1024 19:36:19.685590  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:36:19.710171  637871 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:36:19.710229  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1024 19:36:19.779590  637871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1024 19:36:19.779616  637871 logs.go:123] Gathering logs for kube-apiserver [02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22] ...
	I1024 19:36:19.779631  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02e1b5c6d58a162b2566232366139ba44bf45ce1ee164eb32a61df61c01b4e22"
	I1024 19:36:19.822616  637871 logs.go:123] Gathering logs for kube-scheduler [3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d] ...
	I1024 19:36:19.822656  637871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a466ebf5e39e3587a5ca76cc8a5808ab641b67d8ee57c1baa7719999cb2591d"
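Each "Gathering logs for ..." pair above follows the same two-step shape: resolve container IDs with `crictl ps -a --quiet --name=<name>`, then tail each ID with `crictl logs --tail 400`. A sketch of that pattern with os/exec, assuming crictl is on PATH and sudo is available; this is an illustration, not minikube's logs.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // tailContainerLogs mirrors the two-step pattern in the log above:
    // list container IDs for a name, then tail the last 400 lines of each.
    func tailContainerLogs(name string) error {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return fmt.Errorf("listing %s containers: %w", name, err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Printf("No container was found matching %q\n", name)
    		return nil
    	}
    	for _, id := range ids {
    		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("logs for %s: %w", id, err)
    		}
    		fmt.Printf("==> %s [%s] <==\n%s", name, id, logs)
    	}
    	return nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
    		if err := tailContainerLogs(name); err != nil {
    			fmt.Println(err)
    		}
    	}
    }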
	I1024 19:36:17.965884  676108 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 19:36:17.966273  676108 start.go:159] libmachine.API.Create for "no-preload-539193" (driver="docker")
	I1024 19:36:17.966325  676108 client.go:168] LocalClient.Create starting
	I1024 19:36:17.966625  676108 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/ca.pem
	I1024 19:36:17.966704  676108 main.go:141] libmachine: Decoding PEM data...
	I1024 19:36:17.966735  676108 main.go:141] libmachine: Parsing certificate...
	I1024 19:36:17.966830  676108 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-471553/.minikube/certs/cert.pem
	I1024 19:36:17.966866  676108 main.go:141] libmachine: Decoding PEM data...
	I1024 19:36:17.966886  676108 main.go:141] libmachine: Parsing certificate...
	I1024 19:36:17.967411  676108 cli_runner.go:164] Run: docker network inspect no-preload-539193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:36:17.995938  676108 cli_runner.go:211] docker network inspect no-preload-539193 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:36:17.996098  676108 network_create.go:281] running [docker network inspect no-preload-539193] to gather additional debugging logs...
	I1024 19:36:17.996139  676108 cli_runner.go:164] Run: docker network inspect no-preload-539193
	W1024 19:36:18.021591  676108 cli_runner.go:211] docker network inspect no-preload-539193 returned with exit code 1
	I1024 19:36:18.021633  676108 network_create.go:284] error running [docker network inspect no-preload-539193]: docker network inspect no-preload-539193: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-539193 not found
	I1024 19:36:18.021658  676108 network_create.go:286] output of [docker network inspect no-preload-539193]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-539193 not found
	
	** /stderr **
	I1024 19:36:18.021792  676108 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:36:18.045758  676108 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7cb31ca22f4a IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:85:f3:ac:06} reservation:<nil>}
	I1024 19:36:18.047138  676108 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee0293b013f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:6b:29:92:51} reservation:<nil>}
	I1024 19:36:18.048643  676108 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-00acab23e15c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:9a:c7:7e:9e} reservation:<nil>}
	I1024 19:36:18.050093  676108 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4697fb22f636 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:1d:ee:30:c9} reservation:<nil>}
	I1024 19:36:18.051193  676108 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-cfd8e96fd03f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:b9:17:48:17} reservation:<nil>}
	I1024 19:36:18.052697  676108 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0030bd150}
	I1024 19:36:18.052742  676108 network_create.go:124] attempt to create docker network no-preload-539193 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1024 19:36:18.052819  676108 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-539193 no-preload-539193
	I1024 19:36:18.112682  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I1024 19:36:18.114829  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I1024 19:36:18.116185  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3
	I1024 19:36:18.116321  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I1024 19:36:18.118126  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3
	I1024 19:36:18.150169  676108 network_create.go:108] docker network no-preload-539193 192.168.94.0/24 created
	I1024 19:36:18.150238  676108 kic.go:118] calculated static IP "192.168.94.2" for the "no-preload-539193" container
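The free-subnet search a few lines up (network.go) probes candidate /24s starting at 192.168.49.0 and stepping the third octet by 9 (.49, .58, .67, .76, .85, ...), skipping any subnet that already backs a bridge interface, until it lands on 192.168.94.0/24. A sketch of that scan; the `taken` map stands in for the host-interface probe and is an assumption of this example, not the real pkg/network code:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet walks the candidate /24s seen in the log and
    // returns the first one not already in use.
    func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[cidr] {
    			fmt.Printf("skipping subnet %s that is taken\n", cidr)
    			continue
    		}
    		_, subnet, err := net.ParseCIDR(cidr)
    		if err != nil {
    			return nil, err
    		}
    		return subnet, nil
    	}
    	return nil, fmt.Errorf("no free private /24 found")
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	subnet, err := firstFreeSubnet(taken)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("using free private subnet", subnet) // 192.168.94.0/24, as in the log
    }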
	I1024 19:36:18.150412  676108 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:36:18.162456  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3
	I1024 19:36:18.182881  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1024 19:36:18.182925  676108 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 254.437025ms
	I1024 19:36:18.182942  676108 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1024 19:36:18.183355  676108 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3
	I1024 19:36:18.183625  676108 cli_runner.go:164] Run: docker volume create no-preload-539193 --label name.minikube.sigs.k8s.io=no-preload-539193 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:36:18.207622  676108 oci.go:103] Successfully created a docker volume no-preload-539193
	I1024 19:36:18.207711  676108 cli_runner.go:164] Run: docker run --rm --name no-preload-539193-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-539193 --entrypoint /usr/bin/test -v no-preload-539193:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:36:18.358405  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1024 19:36:18.358435  676108 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 430.034755ms
	I1024 19:36:18.358446  676108 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1024 19:36:18.527937  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 exists
	I1024 19:36:18.527974  676108 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3" took 599.798004ms
	I1024 19:36:18.527987  676108 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
	I1024 19:36:19.254008  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 exists
	I1024 19:36:19.254039  676108 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3" took 1.325751077s
	I1024 19:36:19.254057  676108 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
	I1024 19:36:19.601333  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
	I1024 19:36:19.601386  676108 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3" took 1.673157416s
	I1024 19:36:19.601402  676108 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
	I1024 19:36:19.604888  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 exists
	I1024 19:36:19.604914  676108 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3" took 1.674689691s
	I1024 19:36:19.604944  676108 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
	I1024 19:36:20.242094  676108 cache.go:157] /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 exists
	I1024 19:36:20.242141  676108 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0" took 2.31380926s
	I1024 19:36:20.242161  676108 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1024 19:36:20.242211  676108 cache.go:87] Successfully saved all images to host disk.
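The cache.go lines above show the image-cache contract: each image maps to a per-architecture tar under .minikube/cache/images, "opening" means download-and-save, and "exists" short-circuits later runs. A sketch of that check with the download step stubbed out; the helper names here are hypothetical, not minikube's API:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cachePathFor maps an image ref to the on-disk layout seen in the log,
    // e.g. registry.k8s.io/pause:3.9 -> <cacheDir>/images/amd64/registry.k8s.io/pause_3.9.
    func cachePathFor(cacheDir, arch, ref string) string {
    	name := strings.ReplaceAll(ref, ":", "_")
    	return filepath.Join(cacheDir, "images", arch, name)
    }

    // ensureCached skips images whose tar already exists and would
    // otherwise download and save them (elided here as a stub).
    func ensureCached(cacheDir string, refs []string) {
    	for _, ref := range refs {
    		p := cachePathFor(cacheDir, "amd64", ref)
    		if _, err := os.Stat(p); err == nil {
    			fmt.Printf("%s exists\n", p)
    			continue
    		}
    		fmt.Printf("opening: %s\n", p)
    		// Download-and-save elided; on success the next run hits the
    		// "exists" branch above.
    	}
    }

    func main() {
    	ensureCached(os.ExpandEnv("$HOME/.minikube/cache"),
    		[]string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"})
    }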
	I1024 19:36:18.204583  672747 crio.go:444] Took 1.241552 seconds to copy over tarball
	I1024 19:36:18.204671  672747 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:36:22.084641  672747 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.879936149s)
	I1024 19:36:22.084672  672747 crio.go:451] Took 3.880058 seconds to extract the tarball
	I1024 19:36:22.084707  672747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:36:22.198599  672747 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:36:22.243535  672747 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I1024 19:36:22.243580  672747 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:36:22.243697  672747 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:36:22.243734  672747 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 19:36:22.243835  672747 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 19:36:22.243875  672747 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I1024 19:36:22.243888  672747 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 19:36:22.243834  672747 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 19:36:22.243897  672747 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I1024 19:36:22.243877  672747 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1024 19:36:22.246197  672747 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 19:36:22.246241  672747 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I1024 19:36:22.246313  672747 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 19:36:22.246339  672747 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I1024 19:36:22.246462  672747 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1024 19:36:22.246687  672747 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 19:36:22.246726  672747 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 19:36:22.246693  672747 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:36:22.426320  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I1024 19:36:22.426813  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I1024 19:36:22.446953  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1024 19:36:22.461323  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I1024 19:36:22.465274  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 19:36:22.483488  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I1024 19:36:22.485500  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I1024 19:36:22.485958  672747 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:36:22.489920  672747 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I1024 19:36:22.489976  672747 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I1024 19:36:22.490028  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.545856  672747 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I1024 19:36:22.545916  672747 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I1024 19:36:22.545974  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.649234  672747 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1024 19:36:22.649307  672747 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I1024 19:36:22.649354  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.682826  672747 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I1024 19:36:22.682951  672747 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I1024 19:36:22.683133  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.683264  672747 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I1024 19:36:22.683308  672747 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 19:36:22.683343  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.747564  672747 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I1024 19:36:22.747621  672747 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I1024 19:36:22.747672  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.747704  672747 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I1024 19:36:22.747889  672747 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I1024 19:36:22.747934  672747 ssh_runner.go:195] Run: which crictl
	I1024 19:36:22.803948  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I1024 19:36:22.804131  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I1024 19:36:22.804221  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I1024 19:36:22.804302  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I1024 19:36:22.804416  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I1024 19:36:22.804557  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I1024 19:36:22.804702  672747 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I1024 19:36:22.967001  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I1024 19:36:22.967102  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I1024 19:36:22.967157  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1024 19:36:22.967209  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I1024 19:36:22.967264  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I1024 19:36:22.967325  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I1024 19:36:22.974381  672747 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I1024 19:36:22.974441  672747 cache_images.go:92] LoadImages completed in 730.83841ms
	W1024 19:36:22.974556  672747 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-471553/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I1024 19:36:22.974638  672747 ssh_runner.go:195] Run: crio config
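The LoadImages sequence above applies one flow per image: inspect the runtime for the expected image ID, mark a mismatch as "needs transfer", remove the stale copy with crictl rmi, then try the local tar cache, which here fails with "no such file or directory" because no v1.16.0 tars were ever saved. A sketch of that per-image decision, with an assumed helper rather than the real cache_images.go:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // loadImage reproduces the per-image flow in the log: inspect, mark
    // for transfer if the ID differs, remove, then load from the tar cache.
    // Sketch assumption: podman and crictl are on PATH and sudo works.
    func loadImage(cacheDir, ref, wantID string) error {
    	out, _ := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	if strings.TrimSpace(string(out)) == wantID {
    		return nil // already present with the right ID
    	}
    	fmt.Printf("%q needs transfer: does not exist at hash %q in container runtime\n", ref, wantID)
    	_ = exec.Command("sudo", "crictl", "rmi", ref).Run() // drop any stale copy
    	tar := filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(ref, ":", "_"))
    	if _, err := os.Stat(tar); err != nil {
    		// This is the failure mode in the log: no v1.16.0 tar was ever
    		// cached, so the caller falls back to pulling from a registry.
    		return fmt.Errorf("loading cached images: stat %s: %w", tar, err)
    	}
    	// `podman load -i <tar>` would go here.
    	return nil
    }

    func main() {
    	err := loadImage(os.ExpandEnv("$HOME/.minikube/cache"), "registry.k8s.io/pause:3.1",
    		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e") // hash from the log above
    	if err != nil {
    		fmt.Println(err)
    	}
    }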
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.707457917Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fd56700c08f9da51e26420aac125e3acea2720a275ed844b01d1035330976280/merged/etc/group: no such file or directory"
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.864397991Z" level=info msg="Created container 04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7: kube-system/kube-proxy-6r7cb/kube-proxy" id=5189832f-26a8-49bf-93b4-8711b20b0243 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.865044349Z" level=info msg="Starting container: 04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7" id=ab37b987-6ecd-409b-b67a-2200fa073b34 name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.865838264Z" level=info msg="Created container 5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267: kube-system/kindnet-j6kq7/kindnet-cni" id=5ebc2928-09c0-4bf9-9cd9-67c71e6007ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.866310467Z" level=info msg="Starting container: 5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267" id=8feffc93-90e6-48b9-9cf7-c5038d93410e name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.877807782Z" level=info msg="Started container" PID=4129 containerID=5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267 description=kube-system/kindnet-j6kq7/kindnet-cni id=8feffc93-90e6-48b9-9cf7-c5038d93410e name=/runtime.v1.RuntimeService/StartContainer sandboxID=25d0797df60c88cf4246a02717c363a0cc375e9632f013bf3cb154625ffc7779
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.878273294Z" level=info msg="Started container" PID=4122 containerID=04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7 description=kube-system/kube-proxy-6r7cb/kube-proxy id=ab37b987-6ecd-409b-b67a-2200fa073b34 name=/runtime.v1.RuntimeService/StartContainer sandboxID=61ed8c57dfc2c7a865764231e323b7f3f9202e7f93ee33e69263f7088faae46d
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.885072309Z" level=info msg="Created container d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff: kube-system/coredns-5dd5756b68-9m8kb/coredns" id=605298c7-1e8c-465d-a0f2-a9c730055700 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.941740348Z" level=info msg="Starting container: d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff" id=218944f5-d115-4981-9b0c-cdd19fa3f10c name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 19:36:00 pause-639553 crio[3046]: time="2023-10-24 19:36:00.956221634Z" level=info msg="Started container" PID=4131 containerID=d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff description=kube-system/coredns-5dd5756b68-9m8kb/coredns id=218944f5-d115-4981-9b0c-cdd19fa3f10c name=/runtime.v1.RuntimeService/StartContainer sandboxID=f36905aacb6af86e58836da5e018f03880704c74b13509824f71198785e645ff
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.449586374Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460356606Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460390063Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.460410386Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466817331Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466849696Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.466872074Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.479363669Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.479398814Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.541436620Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.549978411Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.550016911Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.550043273Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.555379224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 24 19:36:01 pause-639553 crio[3046]: time="2023-10-24 19:36:01.555417841Z" level=info msg="Updated default CNI network name to kindnet"
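The "CNI monitoring event" entries above are inotify notifications on /etc/cni/net.d: each CREATE/WRITE/RENAME of 10-kindnet.conflist makes CRI-O re-resolve the default network. A sketch of such a watcher using the third-party github.com/fsnotify/fsnotify module (an assumption of this example; CRI-O's real monitor also parses and validates the conflist):

    package main

    import (
    	"log"
    	"path/filepath"
    	"strings"

    	"github.com/fsnotify/fsnotify" // third-party; assumed fetched with `go get`
    )

    func main() {
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/cni/net.d"); err != nil {
    		log.Fatal(err)
    	}
    	for {
    		select {
    		case ev := <-w.Events:
    			// React only to the event kinds seen in the CRI-O log.
    			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) == 0 {
    				continue
    			}
    			log.Printf("CNI monitoring event %q: %s", ev.Name, ev.Op)
    			if strings.HasSuffix(ev.Name, ".conflist") {
    				log.Printf("Updated default CNI network from %s", filepath.Base(ev.Name))
    			}
    		case err := <-w.Errors:
    			log.Printf("watch error: %v", err)
    		}
    	}
    }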
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d98c3aa91f29b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   24 seconds ago      Running             coredns                   2                   f36905aacb6af       coredns-5dd5756b68-9m8kb
	04179cb9f5a79       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   24 seconds ago      Running             kube-proxy                2                   61ed8c57dfc2c       kube-proxy-6r7cb
	5079118168cb9       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   24 seconds ago      Running             kindnet-cni               2                   25d0797df60c8       kindnet-j6kq7
	a9a0d3327ecdf       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   28 seconds ago      Running             kube-apiserver            2                   5a2daf0b2d617       kube-apiserver-pause-639553
	8f362f68992d1       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   28 seconds ago      Running             kube-scheduler            3                   51f2c7b0b16b5       kube-scheduler-pause-639553
	c1f1d7c3a38d9       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   28 seconds ago      Running             kube-controller-manager   3                   820612a14de06       kube-controller-manager-pause-639553
	4f763e6a35b2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   28 seconds ago      Running             etcd                      3                   648d9baaf7da6       etcd-pause-639553
	7556dd7e77654       10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3   33 seconds ago      Exited              kube-controller-manager   2                   820612a14de06       kube-controller-manager-pause-639553
	f02c9006c5461       6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4   39 seconds ago      Exited              kube-scheduler            2                   51f2c7b0b16b5       kube-scheduler-pause-639553
	e616aa8f6da1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   40 seconds ago      Exited              etcd                      2                   648d9baaf7da6       etcd-pause-639553
	3e39e61ed3be1       53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076   53 seconds ago      Exited              kube-apiserver            1                   5a2daf0b2d617       kube-apiserver-pause-639553
	72fd13232bea6       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   53 seconds ago      Exited              kindnet-cni               1                   25d0797df60c8       kindnet-j6kq7
	d6ca43cfddca0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   53 seconds ago      Exited              coredns                   1                   f36905aacb6af       coredns-5dd5756b68-9m8kb
	2a119c4fecb6a       bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf   53 seconds ago      Exited              kube-proxy                1                   61ed8c57dfc2c       kube-proxy-6r7cb
	
	* 
	* ==> coredns [d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39517 - 52058 "HINFO IN 8041968620250387946.5613235457838200228. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030855643s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [d98c3aa91f29b958d05e6adf699951d07a5d209d5d155e7e26cfbbb5201ad3ff] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44919 - 17757 "HINFO IN 6207427719502831205.5883949342991463497. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029389787s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-639553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-639553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=pause-639553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_35_08_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:35:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-639553
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:36:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:36:00 +0000   Tue, 24 Oct 2023 19:35:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-639553
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859420Ki
	  pods:               110
	System Info:
	  Machine ID:                 36cc75f624ac4a89b6bdc2afc3b63fb5
	  System UUID:                6d3cac5e-0436-491c-b68f-ac2b4782dfce
	  Boot ID:                    f78507ce-bb13-4a64-bee1-5d653b27f216
	  Kernel Version:             5.15.0-1045-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-9m8kb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     65s
	  kube-system                 etcd-pause-639553                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         77s
	  kube-system                 kindnet-j6kq7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-pause-639553             250m (3%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-pause-639553    200m (2%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-6r7cb                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-pause-639553             100m (1%)     0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
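As a quick sanity check on the table just above: the CPU request percentage is the summed pod requests over the node's 8-CPU capacity. A small sketch reproducing the 850m (10%) figure:

    package main

    import "fmt"

    // Sum the per-pod CPU requests from the pod table above and divide
    // by node capacity (8 CPUs = 8000 millicores). Illustration only.
    func main() {
    	const capacityMilliCPU = 8 * 1000
    	requests := []int{100, 100, 100, 250, 200, 0, 100} // millicores, one per pod
    	total := 0
    	for _, r := range requests {
    		total += r
    	}
    	fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/capacityMilliCPU) // 850m (10%)
    }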
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  85s (x8 over 85s)  kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 85s)  kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x8 over 85s)  kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           66s                node-controller  Node pause-639553 event: Registered Node pause-639553 in Controller
	  Normal  NodeReady                64s                kubelet          Node pause-639553 status is now: NodeReady
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node pause-639553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x8 over 29s)  kubelet          Node pause-639553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x8 over 29s)  kubelet          Node pause-639553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node pause-639553 event: Registered Node pause-639553 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +4.223578] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +8.191215] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000006] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[Oct24 19:25] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000008] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +1.010575] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000006] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +2.015766] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000007] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +4.223661] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000008] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[  +8.191175] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-fee0293b013f
	[  +0.000009] ll header: 00000000: 02 42 6b 29 92 51 02 42 c0 a8 3a 02 08 00
	[Oct24 19:28] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000012] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +1.025209] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000005] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +2.011840] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000036] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +4.067487] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000007] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	[  +8.191280] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-a973e64b617b
	[  +0.000007] ll header: 00000000: 02 42 b6 dd 7b d0 02 42 c0 a8 43 02 08 00
	
	* 
	* ==> etcd [4f763e6a35b2c086c5f7cc903f23b8afbfdf5b36caa0cbcfdc6405ca616c7028] <==
	* {"level":"info","ts":"2023-10-24T19:35:57.445295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:35:57.450762Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T19:35:57.451099Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:35:57.451171Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:35:57.451331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:57.451352Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:58.386827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:58.386946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.386984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2023-10-24T19:35:58.389505Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-639553 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:35:58.389522Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:58.389699Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:58.389859Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:58.389917Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:58.391361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-24T19:35:58.391541Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:36:09.341635Z","caller":"traceutil/trace.go:171","msg":"trace[2059745744] linearizableReadLoop","detail":"{readStateIndex:521; appliedIndex:520; }","duration":"102.348208ms","start":"2023-10-24T19:36:09.239266Z","end":"2023-10-24T19:36:09.341615Z","steps":["trace[2059745744] 'read index received'  (duration: 38.823201ms)","trace[2059745744] 'applied index is now lower than readState.Index'  (duration: 63.522093ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-24T19:36:09.341873Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.610738ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-639553\" ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2023-10-24T19:36:09.34199Z","caller":"traceutil/trace.go:171","msg":"trace[1786836361] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-639553; range_end:; response_count:1; response_revision:485; }","duration":"102.743996ms","start":"2023-10-24T19:36:09.239235Z","end":"2023-10-24T19:36:09.341979Z","steps":["trace[1786836361] 'agreement among raft nodes before linearized reading'  (duration: 102.551213ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:36:09.34198Z","caller":"traceutil/trace.go:171","msg":"trace[2131638994] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"154.880532ms","start":"2023-10-24T19:36:09.187066Z","end":"2023-10-24T19:36:09.341947Z","steps":["trace[2131638994] 'process raft request'  (duration: 90.907764ms)","trace[2131638994] 'compare'  (duration: 63.535445ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-24T19:36:21.740415Z","caller":"traceutil/trace.go:171","msg":"trace[1774707635] transaction","detail":"{read_only:false; response_revision:499; number_of_response:1; }","duration":"103.904947ms","start":"2023-10-24T19:36:21.636485Z","end":"2023-10-24T19:36:21.74039Z","steps":["trace[1774707635] 'process raft request'  (duration: 68.914655ms)","trace[1774707635] 'compare'  (duration: 34.852859ms)"],"step_count":2}
	
	* 
	* ==> etcd [e616aa8f6da1b319d518f5a6de368ac08f1e1a4e9122121d273a6594f58b381a] <==
	* {"level":"info","ts":"2023-10-24T19:35:45.701276Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:35:47.090811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-24T19:35:47.090951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.090977Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-24T19:35:47.092986Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-639553 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:35:47.092994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:47.093045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:35:47.093216Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:47.093244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:35:47.094379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:35:47.094601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-24T19:35:54.325976Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-24T19:35:54.326029Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-639553","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-24T19:35:54.326132Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.326155Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.327601Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:35:54.327647Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-24T19:35:54.327696Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-24T19:35:54.330322Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:54.330466Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-24T19:35:54.330485Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-639553","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:36:25 up  3:18,  0 users,  load average: 5.60, 3.68, 2.18
	Linux pause-639553 5.15.0-1045-gcp #53~20.04.2-Ubuntu SMP Wed Oct 18 12:59:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5079118168cb96f69ab91f52e72fdda427400409a02a48cf0eed2db3a768c267] <==
	* I1024 19:36:01.043676       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:36:01.044601       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1024 19:36:01.045030       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:36:01.045120       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:36:01.045185       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:36:01.449153       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:01.449193       1 main.go:227] handling current node
	I1024 19:36:11.555873       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:11.555905       1 main.go:227] handling current node
	I1024 19:36:21.569412       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1024 19:36:21.569458       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917] <==
	* I1024 19:35:32.555642       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:35:32.555903       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1024 19:35:32.641429       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:35:32.641719       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:35:32.641810       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:35:33.041444       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [3e39e61ed3be1fc99205ee24402d4f66c8053d8e6fa22ffa827587ef43f37eb1] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.559820       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.562242       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1024 19:35:37.578331       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [a9a0d3327ecdf7dd07b74e853bdad7048539ccae49bfb91a8f30e092b882e4b4] <==
	* I1024 19:36:00.209330       1 controller.go:85] Starting OpenAPI V3 controller
	I1024 19:36:00.209372       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1024 19:36:00.209383       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1024 19:36:00.209739       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1024 19:36:00.209718       1 controller.go:78] Starting OpenAPI AggregationController
	I1024 19:36:00.341056       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:36:00.341277       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:36:00.341326       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:36:00.341366       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:36:00.343150       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:36:00.358842       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:36:00.365424       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 19:36:00.443246       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:36:00.443482       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:36:00.443676       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:36:00.443774       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 19:36:00.443741       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 19:36:00.444716       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1024 19:36:00.464476       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 19:36:01.214519       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:36:02.396143       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:36:02.554437       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:36:02.567218       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:36:02.664072       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:36:02.675292       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [7556dd7e77654b82a85d9084c0ecdd4d2247163f51098b477845e37c6b4832b7] <==
	* I1024 19:35:52.629191       1 serving.go:348] Generated self-signed cert in-memory
	I1024 19:35:52.888182       1 controllermanager.go:189] "Starting" version="v1.28.3"
	I1024 19:35:52.888217       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:35:52.889560       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1024 19:35:52.889639       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1024 19:35:52.890346       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1024 19:35:52.890606       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [c1f1d7c3a38d9cb392fc7bd632bb227616cf1a0dd698730a50985bca0b466ce1] <==
	* I1024 19:36:12.603073       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1024 19:36:12.605274       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1024 19:36:12.605326       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1024 19:36:12.607987       1 shared_informer.go:318] Caches are synced for GC
	I1024 19:36:12.610025       1 shared_informer.go:318] Caches are synced for crt configmap
	I1024 19:36:12.612367       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1024 19:36:12.612765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="216.795µs"
	I1024 19:36:12.614835       1 shared_informer.go:318] Caches are synced for cronjob
	I1024 19:36:12.619189       1 shared_informer.go:318] Caches are synced for endpoint
	I1024 19:36:12.619306       1 shared_informer.go:318] Caches are synced for disruption
	I1024 19:36:12.626860       1 shared_informer.go:318] Caches are synced for stateful set
	I1024 19:36:12.664183       1 shared_informer.go:318] Caches are synced for attach detach
	I1024 19:36:12.711746       1 shared_informer.go:318] Caches are synced for daemon sets
	I1024 19:36:12.739943       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:36:12.797325       1 shared_informer.go:318] Caches are synced for taint
	I1024 19:36:12.797407       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1024 19:36:12.797514       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1024 19:36:12.797548       1 taint_manager.go:211] "Sending events to api server"
	I1024 19:36:12.797674       1 event.go:307] "Event occurred" object="pause-639553" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-639553 event: Registered Node pause-639553 in Controller"
	I1024 19:36:12.797689       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-639553"
	I1024 19:36:12.797876       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1024 19:36:12.805145       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:36:13.133472       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:36:13.191098       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:36:13.191136       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [04179cb9f5a797892275171f01d7d63cfe4b304a7d570d97224e577db3bcebf7] <==
	* I1024 19:36:00.990470       1 server_others.go:69] "Using iptables proxy"
	I1024 19:36:01.050703       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1024 19:36:01.106319       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:36:01.109211       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:36:01.109298       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:36:01.109308       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:36:01.109337       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:36:01.109547       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:36:01.109789       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:36:01.110445       1 config.go:188] "Starting service config controller"
	I1024 19:36:01.110474       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:36:01.110510       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:36:01.110513       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:36:01.111030       1 config.go:315] "Starting node config controller"
	I1024 19:36:01.116227       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:36:01.116261       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:36:01.211550       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:36:01.211551       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda] <==
	* I1024 19:35:32.678468       1 server_others.go:69] "Using iptables proxy"
	E1024 19:35:32.742039       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-639553": dial tcp 192.168.67.2:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [8f362f68992d1d720b793dddcaf2439b2749b610398c9a8f56c9b870d75a37fd] <==
	* I1024 19:35:58.142049       1 serving.go:348] Generated self-signed cert in-memory
	W1024 19:36:00.261709       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 19:36:00.261824       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:36:00.261895       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 19:36:00.261929       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:36:00.351836       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:36:00.351975       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:36:00.354854       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:36:00.354964       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:36:00.355449       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:36:00.355599       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:36:00.459022       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f02c9006c5461fdb26a7158b616cd24749daedea6b0c4d0066c0016c947d9fe6] <==
	* W1024 19:35:50.589133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.589215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.67.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.718403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.718470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.67.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.810982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.811031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.906726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.906777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:50.994911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:50.994974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:51.104285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:51.104331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.67.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:51.187596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:51.187635       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.67.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:52.796768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:52.796983       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.67.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.444707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.444877       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.67.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.631100       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.631145       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1024 19:35:53.827675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:53.827757       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1024 19:35:54.174228       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E1024 19:35:54.174456       1 run.go:74] "command failed" err="finished without leader elect"
	E1024 19:35:54.174486       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.180765    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: W1024 19:35:57.292292    3790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.292396    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: W1024 19:35:57.341774    3790 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: E1024 19:35:57.341909    3790 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	Oct 24 19:35:57 pause-639553 kubelet[3790]: I1024 19:35:57.848523    3790 kubelet_node_status.go:70] "Attempting to register node" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.344678    3790 apiserver.go:52] "Watching apiserver"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.357771    3790 topology_manager.go:215] "Topology Admit Handler" podUID="f30348b5-115d-4161-a406-07b8e208de06" podNamespace="kube-system" podName="kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.357931    3790 topology_manager.go:215] "Topology Admit Handler" podUID="1a8dcb9c-e2b8-4dd7-b78a-0d6df030fef3" podNamespace="kube-system" podName="coredns-5dd5756b68-9m8kb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.358017    3790 topology_manager.go:215] "Topology Admit Handler" podUID="efda4578-700d-40de-a3f9-060bebdfddc6" podNamespace="kube-system" podName="kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.441445    3790 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445698    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-cni-cfg\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445764    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-xtables-lock\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445797    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efda4578-700d-40de-a3f9-060bebdfddc6-lib-modules\") pod \"kindnet-j6kq7\" (UID: \"efda4578-700d-40de-a3f9-060bebdfddc6\") " pod="kube-system/kindnet-j6kq7"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445853    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f30348b5-115d-4161-a406-07b8e208de06-xtables-lock\") pod \"kube-proxy-6r7cb\" (UID: \"f30348b5-115d-4161-a406-07b8e208de06\") " pod="kube-system/kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.445884    3790 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f30348b5-115d-4161-a406-07b8e208de06-lib-modules\") pod \"kube-proxy-6r7cb\" (UID: \"f30348b5-115d-4161-a406-07b8e208de06\") " pod="kube-system/kube-proxy-6r7cb"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.465388    3790 kubelet_node_status.go:108] "Node was previously registered" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.465549    3790 kubelet_node_status.go:73] "Successfully registered node" node="pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.467724    3790 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.469559    3790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: E1024 19:36:00.545191    3790 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-639553\" already exists" pod="kube-system/kube-apiserver-pause-639553"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.661002    3790 scope.go:117] "RemoveContainer" containerID="72fd13232bea69fd0cb95f20f053d4b2398ee9c1b6ec504dd14610f946429917"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.662576    3790 scope.go:117] "RemoveContainer" containerID="2a119c4fecb6a0750f31a5417017b702a2ac0ef9b501837c0330933732ddbeda"
	Oct 24 19:36:00 pause-639553 kubelet[3790]: I1024 19:36:00.662784    3790 scope.go:117] "RemoveContainer" containerID="d6ca43cfddca0db6aad8e2281063a96de7b4351414f3ac42e0c4714aa6abb311"
	Oct 24 19:36:07 pause-639553 kubelet[3790]: I1024 19:36:07.898919    3790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-639553 -n pause-639553
helpers_test.go:261: (dbg) Run:  kubectl --context pause-639553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (62.32s)
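To reproduce this failure outside CI, the failing test can be re-run in isolation from a minikube source checkout. The command below is a minimal sketch, an assumption based on minikube's published integration-test conventions (the -run pattern and the suite's -minikube-start-args flag), not a command recorded in this report; the driver and runtime flags mirror this job's configuration:

	# from the minikube repo root, with out/minikube-linux-amd64 already built
	go test -v -timeout 30m ./test/integration \
	  -run "TestPause/serial/SecondStartNoReconfiguration" \
	  -minikube-start-args="--driver=docker --container-runtime=crio"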

                                                
                                    

Test pass (272/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.33
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.3/json-events 5.26
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.26
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
18 TestDownloadOnlyKic 1.43
19 TestBinaryMirror 0.82
20 TestOffline 90.57
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
25 TestAddons/Setup 145.94
27 TestAddons/parallel/Registry 14.14
29 TestAddons/parallel/InspektorGadget 10.85
30 TestAddons/parallel/MetricsServer 5.82
31 TestAddons/parallel/HelmTiller 11.44
33 TestAddons/parallel/CSI 75.9
34 TestAddons/parallel/Headlamp 14.02
35 TestAddons/parallel/CloudSpanner 5.65
36 TestAddons/parallel/LocalPath 56.26
37 TestAddons/parallel/NvidiaDevicePlugin 5.48
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/StoppedEnableDisable 12.4
42 TestCertOptions 31.09
43 TestCertExpiration 247.71
45 TestForceSystemdFlag 33.96
46 TestForceSystemdEnv 41.87
48 TestKVMDriverInstallOrUpdate 3.16
52 TestErrorSpam/setup 26.69
53 TestErrorSpam/start 0.75
54 TestErrorSpam/status 1.03
55 TestErrorSpam/pause 1.71
56 TestErrorSpam/unpause 1.7
57 TestErrorSpam/stop 1.48
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 46.61
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 41.89
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
69 TestFunctional/serial/CacheCmd/cache/add_local 1.37
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
74 TestFunctional/serial/CacheCmd/cache/delete 0.16
75 TestFunctional/serial/MinikubeKubectlCmd 0.14
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 31.39
78 TestFunctional/serial/ComponentHealth 0.08
79 TestFunctional/serial/LogsCmd 1.65
80 TestFunctional/serial/LogsFileCmd 1.62
81 TestFunctional/serial/InvalidService 4.72
83 TestFunctional/parallel/ConfigCmd 0.63
84 TestFunctional/parallel/DashboardCmd 20.93
85 TestFunctional/parallel/DryRun 0.54
86 TestFunctional/parallel/InternationalLanguage 0.22
87 TestFunctional/parallel/StatusCmd 1.12
91 TestFunctional/parallel/ServiceCmdConnect 7.66
92 TestFunctional/parallel/AddonsCmd 0.19
93 TestFunctional/parallel/PersistentVolumeClaim 27.19
95 TestFunctional/parallel/SSHCmd 0.93
96 TestFunctional/parallel/CpCmd 1.57
97 TestFunctional/parallel/MySQL 25.01
98 TestFunctional/parallel/FileSync 0.32
99 TestFunctional/parallel/CertSync 2.07
103 TestFunctional/parallel/NodeLabels 0.06
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
107 TestFunctional/parallel/License 0.25
108 TestFunctional/parallel/Version/short 0.11
109 TestFunctional/parallel/Version/components 1.49
110 TestFunctional/parallel/ServiceCmd/DeployApp 11.35
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.53
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.51
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.59
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.53
117 TestFunctional/parallel/ImageCommands/ImageBuild 8.07
118 TestFunctional/parallel/ImageCommands/Setup 1.06
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.45
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.81
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.15
124 TestFunctional/parallel/ServiceCmd/List 0.44
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
127 TestFunctional/parallel/ServiceCmd/Format 0.45
128 TestFunctional/parallel/ServiceCmd/URL 0.41
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.63
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
131 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
137 TestFunctional/parallel/ProfileCmd/profile_list 0.43
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
139 TestFunctional/parallel/MountCmd/any-port 7.62
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.02
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.51
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.03
147 TestFunctional/parallel/MountCmd/specific-port 2.55
148 TestFunctional/parallel/MountCmd/VerifyCleanup 2.87
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 75.28
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.48
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
162 TestJSONOutput/start/Command 69.38
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.76
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.71
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 6.01
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.26
187 TestKicCustomNetwork/create_custom_network 35.85
188 TestKicCustomNetwork/use_default_bridge_network 27.26
189 TestKicExistingNetwork 28.43
190 TestKicCustomSubnet 27.12
191 TestKicStaticIP 30.18
192 TestMainNoArgs 0.08
193 TestMinikubeProfile 57.95
196 TestMountStart/serial/StartWithMountFirst 6.2
197 TestMountStart/serial/VerifyMountFirst 0.29
198 TestMountStart/serial/StartWithMountSecond 5.67
199 TestMountStart/serial/VerifyMountSecond 0.3
200 TestMountStart/serial/DeleteFirst 1.73
201 TestMountStart/serial/VerifyMountPostDelete 0.3
202 TestMountStart/serial/Stop 1.24
203 TestMountStart/serial/RestartStopped 7.23
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 70.37
208 TestMultiNode/serial/DeployApp2Nodes 4.52
210 TestMultiNode/serial/AddNode 17.87
211 TestMultiNode/serial/ProfileList 0.32
212 TestMultiNode/serial/CopyFile 10.81
213 TestMultiNode/serial/StopNode 2.34
214 TestMultiNode/serial/StartAfterStop 11.4
215 TestMultiNode/serial/RestartKeepsNodes 114.43
216 TestMultiNode/serial/DeleteNode 4.94
217 TestMultiNode/serial/StopMultiNode 24.18
218 TestMultiNode/serial/RestartMultiNode 80.44
219 TestMultiNode/serial/ValidateNameConflict 28.87
224 TestPreload 152.98
226 TestScheduledStopUnix 102.98
229 TestInsufficientStorage 14.15
232 TestKubernetesUpgrade 353.88
233 TestMissingContainerUpgrade 158.18
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
236 TestNoKubernetes/serial/StartWithK8s 39.23
237 TestNoKubernetes/serial/StartWithStopK8s 8.13
238 TestNoKubernetes/serial/Start 10.57
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
240 TestNoKubernetes/serial/ProfileList 1.39
241 TestNoKubernetes/serial/Stop 1.29
242 TestNoKubernetes/serial/StartNoArgs 7.85
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
244 TestStoppedBinaryUpgrade/Setup 0.39
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
255 TestPause/serial/Start 40.57
264 TestNetworkPlugins/group/false 4.99
269 TestStartStop/group/old-k8s-version/serial/FirstStart 125.21
271 TestStartStop/group/no-preload/serial/FirstStart 64.13
273 TestStartStop/group/embed-certs/serial/FirstStart 49.01
274 TestStartStop/group/embed-certs/serial/DeployApp 8.38
275 TestStartStop/group/no-preload/serial/DeployApp 7.47
276 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
277 TestStartStop/group/embed-certs/serial/Stop 12.1
278 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
279 TestStartStop/group/no-preload/serial/Stop 12.14
280 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
281 TestStartStop/group/embed-certs/serial/SecondStart 338.19
282 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.3
283 TestStartStop/group/no-preload/serial/SecondStart 341.23
284 TestStartStop/group/old-k8s-version/serial/DeployApp 8.55
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
286 TestStartStop/group/old-k8s-version/serial/Stop 12.19
288 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.87
289 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
290 TestStartStop/group/old-k8s-version/serial/SecondStart 422.55
291 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.42
292 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
293 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
295 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 344.34
296 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.03
297 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.08
298 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
300 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
301 TestStartStop/group/embed-certs/serial/Pause 3.17
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
303 TestStartStop/group/no-preload/serial/Pause 3.66
305 TestStartStop/group/newest-cni/serial/FirstStart 39.02
306 TestNetworkPlugins/group/auto/Start 47.63
307 TestStartStop/group/newest-cni/serial/DeployApp 0
308 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
309 TestStartStop/group/newest-cni/serial/Stop 1.37
310 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
311 TestStartStop/group/newest-cni/serial/SecondStart 29
312 TestNetworkPlugins/group/auto/KubeletFlags 0.34
313 TestNetworkPlugins/group/auto/NetCatPod 10.36
314 TestNetworkPlugins/group/auto/DNS 0.2
315 TestNetworkPlugins/group/auto/Localhost 0.17
316 TestNetworkPlugins/group/auto/HairPin 0.17
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
320 TestStartStop/group/newest-cni/serial/Pause 3.37
321 TestNetworkPlugins/group/kindnet/Start 75.45
322 TestNetworkPlugins/group/calico/Start 72.85
323 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.02
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.4
328 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
329 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
330 TestStartStop/group/old-k8s-version/serial/Pause 3.89
331 TestNetworkPlugins/group/custom-flannel/Start 62.15
332 TestNetworkPlugins/group/enable-default-cni/Start 42.99
333 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
334 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
335 TestNetworkPlugins/group/calico/ControllerPod 5.03
336 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
337 TestNetworkPlugins/group/calico/KubeletFlags 0.32
338 TestNetworkPlugins/group/calico/NetCatPod 11.34
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.37
341 TestNetworkPlugins/group/kindnet/DNS 0.18
342 TestNetworkPlugins/group/kindnet/Localhost 0.18
343 TestNetworkPlugins/group/kindnet/HairPin 0.21
344 TestNetworkPlugins/group/calico/DNS 0.26
345 TestNetworkPlugins/group/calico/Localhost 0.19
346 TestNetworkPlugins/group/calico/HairPin 0.18
347 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
348 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
349 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
352 TestNetworkPlugins/group/custom-flannel/DNS 0.24
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.24
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
355 TestNetworkPlugins/group/flannel/Start 66.01
356 TestNetworkPlugins/group/bridge/Start 41.74
357 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
358 TestNetworkPlugins/group/bridge/NetCatPod 10.28
359 TestNetworkPlugins/group/bridge/DNS 32.5
360 TestNetworkPlugins/group/flannel/ControllerPod 5.02
361 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
362 TestNetworkPlugins/group/flannel/NetCatPod 10.29
363 TestNetworkPlugins/group/flannel/DNS 0.18
364 TestNetworkPlugins/group/flannel/Localhost 0.16
365 TestNetworkPlugins/group/flannel/HairPin 0.17
366 TestNetworkPlugins/group/bridge/Localhost 0.17
367 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.16.0/json-events (7.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-712524 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-712524 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.326753026s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.33s)
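Each line of the -o=json output above is a CloudEvents-style JSON record, so the progress stream is scriptable. A minimal sketch of filtering it, assuming jq is available and that step records carry the io.k8s.sigs.minikube.step event type; the profile name download-demo is a placeholder, not one used by this run:

	# print just the step names emitted during a download-only run
	out/minikube-linux-amd64 start -o=json --download-only -p download-demo --force \
	  --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'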

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-712524
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-712524: exit status 85 (100.587706ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-712524 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-712524        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:37.281353  478335 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:37.281517  478335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:37.281526  478335 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:37.281531  478335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:37.281722  478335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	W1024 19:00:37.281850  478335 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-471553/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-471553/.minikube/config/config.json: no such file or directory
	I1024 19:00:37.282523  478335 out.go:303] Setting JSON to true
	I1024 19:00:37.283729  478335 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9785,"bootTime":1698164253,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:37.283806  478335 start.go:138] virtualization: kvm guest
	I1024 19:00:37.287257  478335 out.go:97] [download-only-712524] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:37.289590  478335 out.go:169] MINIKUBE_LOCATION=17485
	W1024 19:00:37.287462  478335 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball: no such file or directory
	I1024 19:00:37.287543  478335 notify.go:220] Checking for updates...
	I1024 19:00:37.293942  478335 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:37.296335  478335 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:00:37.298588  478335 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:00:37.300527  478335 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1024 19:00:37.304336  478335 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:00:37.304638  478335 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:37.333139  478335 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:00:37.333243  478335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:37.395596  478335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-10-24 19:00:37.384840844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:37.395717  478335 docker.go:295] overlay module found
	I1024 19:00:37.397954  478335 out.go:97] Using the docker driver based on user configuration
	I1024 19:00:37.397987  478335 start.go:298] selected driver: docker
	I1024 19:00:37.397999  478335 start.go:902] validating driver "docker" against <nil>
	I1024 19:00:37.398097  478335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:37.461361  478335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-24 19:00:37.450554516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:37.461604  478335 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:37.462132  478335 start_flags.go:386] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1024 19:00:37.462322  478335 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1024 19:00:37.465398  478335 out.go:169] Using Docker driver with root privileges
	I1024 19:00:37.468086  478335 cni.go:84] Creating CNI manager for ""
	I1024 19:00:37.468121  478335 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:00:37.468134  478335 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:00:37.468149  478335 start_flags.go:323] config:
	{Name:download-only-712524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-712524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:37.470479  478335 out.go:97] Starting control plane node download-only-712524 in cluster download-only-712524
	I1024 19:00:37.470516  478335 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:00:37.472539  478335 out.go:97] Pulling base image ...
	I1024 19:00:37.472588  478335 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:00:37.472708  478335 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:00:37.490677  478335 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:00:37.490883  478335 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:00:37.490968  478335 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:00:37.504170  478335 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:37.504203  478335 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:37.504346  478335 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:00:37.506878  478335 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1024 19:00:37.506913  478335 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:37.546421  478335 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:40.957536  478335 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:00:41.456622  478335 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:41.456722  478335 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-712524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
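
The log above traces minikube's preload flow end to end: check the local docker daemon, check the local cache directory, then fetch the tarball with an md5 digest pinned in the URL ("?checksum=md5:...") and verify it before trusting the cache. A minimal Go sketch of that download-and-verify step, as an illustration only; the function name and structure are assumptions, not minikube's actual download.go:

	package preload

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// DownloadWithMD5 fetches url into dest and rejects the file if its
	// md5 digest does not match wantMD5 (the hex string after
	// "?checksum=md5:" in the log above).
	func DownloadWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		h := md5.New()
		// Stream the body into the file and the hash in one pass.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

For the run above, the expected digest would be the one embedded in the logged URL, 432b600409d778ea7a21214e83948570.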

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/json-events (5.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-712524 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-712524 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.259829714s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (5.26s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-712524
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-712524: exit status 85 (85.959745ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-712524 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-712524        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-712524 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-712524        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:44.715240  478478 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:44.715596  478478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:44.715613  478478 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:44.715620  478478 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:44.715919  478478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	W1024 19:00:44.716108  478478 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-471553/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-471553/.minikube/config/config.json: no such file or directory
	I1024 19:00:44.716677  478478 out.go:303] Setting JSON to true
	I1024 19:00:44.717771  478478 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9792,"bootTime":1698164253,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:44.717849  478478 start.go:138] virtualization: kvm guest
	I1024 19:00:44.720519  478478 out.go:97] [download-only-712524] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:00:44.722643  478478 out.go:169] MINIKUBE_LOCATION=17485
	I1024 19:00:44.720818  478478 notify.go:220] Checking for updates...
	I1024 19:00:44.726748  478478 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:44.728694  478478 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:00:44.730519  478478 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:00:44.732179  478478 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1024 19:00:44.735131  478478 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:00:44.735673  478478 config.go:182] Loaded profile config "download-only-712524": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1024 19:00:44.735735  478478 start.go:810] api.Load failed for download-only-712524: filestore "download-only-712524": Docker machine "download-only-712524" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:00:44.735821  478478 driver.go:378] Setting default libvirt URI to qemu:///system
	W1024 19:00:44.735851  478478 start.go:810] api.Load failed for download-only-712524: filestore "download-only-712524": Docker machine "download-only-712524" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:00:44.759241  478478 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:00:44.759464  478478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:44.814738  478478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-24 19:00:44.805484012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:44.814882  478478 docker.go:295] overlay module found
	I1024 19:00:44.817021  478478 out.go:97] Using the docker driver based on existing profile
	I1024 19:00:44.817056  478478 start.go:298] selected driver: docker
	I1024 19:00:44.817064  478478 start.go:902] validating driver "docker" against &{Name:download-only-712524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-712524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:44.817364  478478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:00:44.883261  478478 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-24 19:00:44.870967786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:00:44.884136  478478 cni.go:84] Creating CNI manager for ""
	I1024 19:00:44.884166  478478 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:00:44.884180  478478 start_flags.go:323] config:
	{Name:download-only-712524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-712524 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:44.887349  478478 out.go:97] Starting control plane node download-only-712524 in cluster download-only-712524
	I1024 19:00:44.887393  478478 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:00:44.889386  478478 out.go:97] Pulling base image ...
	I1024 19:00:44.889438  478478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:44.889557  478478 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:00:44.910401  478478 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:00:44.910543  478478 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:00:44.910563  478478 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1024 19:00:44.910568  478478 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1024 19:00:44.910589  478478 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:00:44.927482  478478 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:44.927530  478478 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:44.927688  478478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:44.930216  478478 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1024 19:00:44.930282  478478 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:44.964509  478478 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1024 19:00:48.332476  478478 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:48.332597  478478 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-471553/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1024 19:00:49.288827  478478 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:00:49.288955  478478 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/download-only-712524/config.json ...
	I1024 19:00:49.289178  478478 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:00:49.289423  478478 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17485-471553/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-712524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)
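
Unlike the first run, this start finds a saved profile and validates the docker driver against it before moving KubernetesConfig to v1.28.3 (note that the Nodes entry in the dump still reads v1.16.0; download-only runs never re-provision a node, so the stale entry is not exercised). A rough sketch of the profile-load step, assuming the on-disk layout visible in the log ($MINIKUBE_HOME/profiles/<name>/config.json); the struct below carries only two of the many fields in minikube's real ClusterConfig:

	package profile

	import (
		"encoding/json"
		"os"
		"path/filepath"
	)

	// Only a fragment of the saved config; the real object has every
	// field dumped in the log above.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		KubernetesConfig KubernetesConfig
	}

	// Load reads profiles/<name>/config.json under the minikube home.
	func Load(minikubeHome, name string) (*ClusterConfig, error) {
		data, err := os.ReadFile(filepath.Join(minikubeHome, "profiles", name, "config.json"))
		if err != nil {
			return nil, err
		}
		var cc ClusterConfig
		if err := json.Unmarshal(data, &cc); err != nil {
			return nil, err
		}
		return &cc, nil
	}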

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.26s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-712524
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.43s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-108940 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-108940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-108940
--- PASS: TestDownloadOnlyKic (1.43s)

                                                
                                    
x
+
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-298100 --alsologtostderr --binary-mirror http://127.0.0.1:41549 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-298100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-298100
--- PASS: TestBinaryMirror (0.82s)
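
TestBinaryMirror passes --binary-mirror http://127.0.0.1:41549, so minikube resolves its kubectl/kubelet/kubeadm downloads against a local endpoint instead of dl.k8s.io. Any static file server with the matching directory layout can play that role; a minimal stand-in (the ./mirror directory is an assumption, not what the test harness actually serves from):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve e.g. ./mirror/release/v1.28.3/bin/linux/amd64/kubectl so
		// requests that would otherwise hit dl.k8s.io resolve locally.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:41549", nil))
	}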

                                                
                                    
x
+
TestOffline (90.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-313980 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-313980 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m24.050223636s)
helpers_test.go:175: Cleaning up "offline-crio-313980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-313980
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-313980: (6.51895626s)
--- PASS: TestOffline (90.57s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-291433
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-291433: exit status 85 (74.957412ms)

                                                
                                                
-- stdout --
	* Profile "addons-291433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291433"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
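
Both PreSetup tests assert on the exit code alone: minikube returns 85 here because the addons-291433 profile does not exist yet. A sketch of capturing that code from Go, roughly what the test helpers do when they report "Non-zero exit ... exit status 85":

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"addons", "enable", "dashboard", "-p", "addons-291433")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		// A non-zero exit surfaces as *exec.ExitError; 85 is the value
		// expected here while the profile is missing.
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}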

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-291433
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-291433: exit status 85 (75.265789ms)

                                                
                                                
-- stdout --
	* Profile "addons-291433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291433"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (145.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-291433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-291433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.943514929s)
--- PASS: TestAddons/Setup (145.94s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 11.356212ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-t6vg5" [9e38b9b1-7def-4c43-a353-22ddf6cbe203] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015013803s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bgxs6" [00eacd34-1a93-4ccc-85e2-7605a5e16b4e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0163604s
addons_test.go:339: (dbg) Run:  kubectl --context addons-291433 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-291433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-291433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.182474965s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.14s)
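
The in-cluster check above ("wget --spider -S http://registry.kube-system.svc.cluster.local") is a headers-only probe of the registry Service by its cluster DNS name. An equivalent probe in Go, which is only meaningful when run inside the cluster where that name resolves:

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// HEAD is the closest analogue of wget --spider: fetch the
		// response headers and discard the body.
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}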

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-znpzs" [78ea082b-96dd-450b-9c4a-6b9175ce39eb] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.050409768s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-291433
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-291433: (5.796204864s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 11.253778ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-l55zx" [73994317-9cc6-4c99-b4dd-cac48cecc00d] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015207771s
addons_test.go:414: (dbg) Run:  kubectl --context addons-291433 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.44s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.547086ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-hjbd2" [83360eb6-06d9-43a5-892f-887cc8587848] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.018254393s
addons_test.go:472: (dbg) Run:  kubectl --context addons-291433 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-291433 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.789357557s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.44s)

                                                
                                    
x
+
TestAddons/parallel/CSI (75.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 12.294695ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-291433 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-291433 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b9c9b5f1-832f-4219-a49f-6f1823863150] Pending
helpers_test.go:344: "task-pv-pod" [b9c9b5f1-832f-4219-a49f-6f1823863150] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b9c9b5f1-832f-4219-a49f-6f1823863150] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.013622032s
addons_test.go:583: (dbg) Run:  kubectl --context addons-291433 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-291433 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-291433 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-291433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-291433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cf40af15-ea71-45d6-9474-346081ae7cbb] Pending
helpers_test.go:344: "task-pv-pod-restore" [cf40af15-ea71-45d6-9474-346081ae7cbb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cf40af15-ea71-45d6-9474-346081ae7cbb] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.016746341s
addons_test.go:625: (dbg) Run:  kubectl --context addons-291433 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-291433 delete pod task-pv-pod-restore: (1.347671881s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-291433 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-291433 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-291433 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.895778904s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (75.90s)
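
Each repeated "kubectl get pvc ... -o jsonpath={.status.phase}" line above is one iteration of a poll loop: the helper re-reads the phase until the claim reports Bound or the 6m0s budget runs out. A compact sketch of such a loop, shelling out to kubectl as the helpers do (the 2-second interval is an assumption):

	package helpers

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound polls the claim's .status.phase the same way the
	// helper above does, returning once it reads "Bound".
	func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
	}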

                                                
                                    
x
+
TestAddons/parallel/Headlamp (14.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-291433 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-291433 --alsologtostderr -v=1: (1.96385628s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
2023/10/24 19:03:32 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "headlamp-94b766c-4js4r" [9cca9c39-316c-4797-abe9-40e4e3aaef3c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-4js4r" [9cca9c39-316c-4797-abe9-40e4e3aaef3c] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.052682372s
--- PASS: TestAddons/parallel/Headlamp (14.02s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-q59l5" [30a956dd-83b4-44f1-ad6e-17358bcca226] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009135882s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-291433
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (56.26s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-291433 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-291433 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d5a25bdb-4b1e-4262-9807-487453d91510] Pending
helpers_test.go:344: "test-local-path" [d5a25bdb-4b1e-4262-9807-487453d91510] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d5a25bdb-4b1e-4262-9807-487453d91510] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d5a25bdb-4b1e-4262-9807-487453d91510] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.011367447s
addons_test.go:890: (dbg) Run:  kubectl --context addons-291433 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 ssh "cat /opt/local-path-provisioner/pvc-0e7eeee0-250a-4774-8e1e-98736e535d77_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-291433 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-291433 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-291433 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-291433 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.392231091s)
--- PASS: TestAddons/parallel/LocalPath (56.26s)
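The pass above hinges on the phase polling at helpers_test.go:394: the helper re-runs kubectl with a JSONPath query until the PVC reports Bound. Below is a minimal Go sketch of that polling pattern, not the suite's actual helper; the context, namespace, and PVC name are copied from the log, while the timeout and poll interval are illustrative.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute) // the test allows 5m0s
		for time.Now().Before(deadline) {
			// Same query the log shows: read only the PVC's status.phase field.
			out, err := exec.Command("kubectl", "--context", "addons-291433",
				"get", "pvc", "test-pvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("PVC test-pvc is Bound")
				return
			}
			time.Sleep(2 * time.Second) // illustrative interval
		}
		log.Fatal("timed out waiting for PVC test-pvc to become Bound")
	}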

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v72v9" [6d61b791-a0a9-4ca6-bc8b-eb4e7f63c5e4] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.017980219s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-291433
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-291433 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-291433 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-291433
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-291433: (12.040704089s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-291433
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-291433
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-291433
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestCertOptions (31.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-742303 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-742303 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.999165464s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-742303 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-742303 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-742303 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-742303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-742303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-742303: (2.283337068s)
--- PASS: TestCertOptions (31.09s)
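The openssl step above is how the test asserts that the extra --apiserver-ips and --apiserver-names landed in the API server certificate's SANs. A minimal sketch of the same inspection in Go, assuming the certificate has first been copied out of the node to a local apiserver.crt (the path is a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Placeholder path; in the test the file lives at
		// /var/lib/minikube/certs/apiserver.crt inside the node.
		data, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)   // should include localhost, www.google.com
		fmt.Println("IP SANs:", cert.IPAddresses) // should include 127.0.0.1, 192.168.15.15
	}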

TestCertExpiration (247.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-381520 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-381520 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.054542749s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-381520 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-381520 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.478385095s)
helpers_test.go:175: Cleaning up "cert-expiration-381520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-381520
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-381520: (2.179209017s)
--- PASS: TestCertExpiration (247.71s)
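The two --cert-expiration values are Go-style durations: 3m forces certificates that expire almost immediately, while 8760h is one year. A quick check of the arithmetic:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		d, err := time.ParseDuration("8760h") // the value passed to --cert-expiration above
		if err != nil {
			panic(err)
		}
		fmt.Printf("%.0f days\n", d.Hours()/24) // prints: 365 days
	}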

TestForceSystemdFlag (33.96s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-453049 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-453049 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.033906174s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-453049 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-453049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-453049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-453049: (2.611902159s)
--- PASS: TestForceSystemdFlag (33.96s)
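The cat of /etc/crio/crio.conf.d/02-crio.conf is how the test verifies that --force-systemd took effect; presumably it looks for the systemd cgroup manager in the drop-in. A minimal sketch of such a check, assuming it runs inside the minikube node and that the drop-in uses the standard cgroup_manager key:

	package main

	import (
		"fmt"
		"log"
		"os"
		"strings"
	)

	func main() {
		// Path taken from the ssh command above; must run inside the minikube node.
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			log.Fatal(err)
		}
		if strings.Contains(string(data), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
		} else {
			fmt.Println("systemd cgroup manager not set in the drop-in")
		}
	}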

TestForceSystemdEnv (41.87s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-348939 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-348939 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.15433241s)
helpers_test.go:175: Cleaning up "force-systemd-env-348939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-348939
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-348939: (2.719243435s)
--- PASS: TestForceSystemdEnv (41.87s)

TestKVMDriverInstallOrUpdate (3.16s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.16s)

TestErrorSpam/setup (26.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-001090 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-001090 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-001090 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-001090 --driver=docker  --container-runtime=crio: (26.686584386s)
--- PASS: TestErrorSpam/setup (26.69s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (1.71s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 pause
--- PASS: TestErrorSpam/pause (1.71s)

TestErrorSpam/unpause (1.7s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 stop: (1.236063882s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-001090 --log_dir /tmp/nospam-001090 stop
--- PASS: TestErrorSpam/stop (1.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17485-471553/.minikube/files/etc/test/nested/copy/478323/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1024 19:08:18.869975  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:18.875883  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:18.886234  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:18.906609  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:18.947030  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:19.027430  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:19.187896  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:19.508534  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:20.149659  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:21.430089  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-558204 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (46.605353258s)
--- PASS: TestFunctional/serial/StartWithProxy (46.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.89s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --alsologtostderr -v=8
E1024 19:08:23.991113  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:29.111422  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:39.352599  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:08:59.833033  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-558204 --alsologtostderr -v=8: (41.889279319s)
functional_test.go:659: soft start took 41.89019127s for "functional-558204" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.89s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-558204 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 cache add registry.k8s.io/pause:3.1: (1.069165458s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 cache add registry.k8s.io/pause:3.3: (1.128787364s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558204 /tmp/TestFunctionalserialCacheCmdcacheadd_local4202140257/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache add minikube-local-cache-test:functional-558204
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache delete minikube-local-cache-test:functional-558204
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558204
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.588919ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 kubectl -- --context functional-558204 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-558204 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (31.39s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1024 19:09:40.794487  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-558204 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.38646398s)
functional_test.go:757: restart took 31.386589261s for "functional-558204" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.39s)

TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-558204 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
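The phase/status pairs above come from decoding kubectl's JSON for the control-plane pods and reading each pod's phase plus its Ready condition. A minimal sketch of the same check, unmarshalling only the fields it needs (the struct is illustrative, not the suite's own type):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList mirrors just the parts of `kubectl get po -o=json` we read.
	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		// Same command the test runs, context name copied from the log.
		out, err := exec.Command("kubectl", "--context", "functional-558204",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}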

TestFunctional/serial/LogsCmd (1.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 logs: (1.648206174s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.62s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 logs --file /tmp/TestFunctionalserialLogsFileCmd3503875934/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 logs --file /tmp/TestFunctionalserialLogsFileCmd3503875934/001/logs.txt: (1.622754565s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.62s)

TestFunctional/serial/InvalidService (4.72s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-558204 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-558204
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-558204: exit status 115 (377.878476ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31172 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-558204 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-558204 delete -f testdata/invalidsvc.yaml: (1.098710805s)
--- PASS: TestFunctional/serial/InvalidService (4.72s)

TestFunctional/parallel/ConfigCmd (0.63s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 config get cpus: exit status 14 (121.162753ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 config get cpus: exit status 14 (91.521836ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.63s)

TestFunctional/parallel/DashboardCmd (20.93s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558204 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-558204 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 517013: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.93s)

TestFunctional/parallel/DryRun (0.54s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (235.681873ms)
-- stdout --
	* [functional-558204] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1024 19:10:11.812118  514614 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:10:11.812272  514614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:11.812285  514614 out.go:309] Setting ErrFile to fd 2...
	I1024 19:10:11.812296  514614 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:11.813034  514614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:10:11.813823  514614 out.go:303] Setting JSON to false
	I1024 19:10:11.815037  514614 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10359,"bootTime":1698164253,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:10:11.815126  514614 start.go:138] virtualization: kvm guest
	I1024 19:10:11.818515  514614 out.go:177] * [functional-558204] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:10:11.822452  514614 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:10:11.822399  514614 notify.go:220] Checking for updates...
	I1024 19:10:11.825018  514614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:10:11.828494  514614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:10:11.830975  514614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:10:11.833750  514614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:10:11.836595  514614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:10:11.840959  514614 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:10:11.841856  514614 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:10:11.873441  514614 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:10:11.873585  514614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:10:11.949916  514614 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-24 19:10:11.937301331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:10:11.950078  514614 docker.go:295] overlay module found
	I1024 19:10:11.954122  514614 out.go:177] * Using the docker driver based on existing profile
	I1024 19:10:11.957022  514614 start.go:298] selected driver: docker
	I1024 19:10:11.957048  514614 start.go:902] validating driver "docker" against &{Name:functional-558204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-558204 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:10:11.957228  514614 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:10:11.962279  514614 out.go:177] 
	W1024 19:10:11.965379  514614 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1024 19:10:11.968630  514614 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.54s)
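The dry run exits immediately because minikube validates the requested memory before doing any real work; the RSRC_INSUFFICIENT_REQ_MEMORY message quotes a usable minimum of 1800MB. An illustrative sketch of that kind of guard; the function and constant names are ours, not minikube's:

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // floor quoted in the error above

	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(validateRequestedMemory(250)) // mirrors the --memory 250MB dry run
	}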

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-558204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-558204 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (220.948082ms)
-- stdout --
	* [functional-558204] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1024 19:10:12.364685  514869 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:10:12.364891  514869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:12.364903  514869 out.go:309] Setting ErrFile to fd 2...
	I1024 19:10:12.364911  514869 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:12.365366  514869 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:10:12.366325  514869 out.go:303] Setting JSON to false
	I1024 19:10:12.367505  514869 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10360,"bootTime":1698164253,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:10:12.367575  514869 start.go:138] virtualization: kvm guest
	I1024 19:10:12.370339  514869 out.go:177] * [functional-558204] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1024 19:10:12.373130  514869 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:10:12.373046  514869 notify.go:220] Checking for updates...
	I1024 19:10:12.375181  514869 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:10:12.377454  514869 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:10:12.379765  514869 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:10:12.381877  514869 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:10:12.384492  514869 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:10:12.387315  514869 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:10:12.387889  514869 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:10:12.412581  514869 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:10:12.412710  514869 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:10:12.479328  514869 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-24 19:10:12.466664228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:10:12.479503  514869 docker.go:295] overlay module found
	I1024 19:10:12.482308  514869 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1024 19:10:12.484356  514869 start.go:298] selected driver: docker
	I1024 19:10:12.484381  514869 start.go:902] validating driver "docker" against &{Name:functional-558204 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-558204 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:10:12.484519  514869 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:10:12.488429  514869 out.go:177] 
	W1024 19:10:12.491310  514869 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1024 19:10:12.493734  514869 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
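Note: the second invocation above passes a Go template to `status -f`; the keys (`host`, `kublet`, ...) are arbitrary labels chosen by the test, not struct field names, which is why the misspelling is harmless. A minimal sketch of the same call with conventional spelling (the expected values are an assumption for a healthy cluster):

    out/minikube-linux-amd64 -p functional-558204 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured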

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-558204 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-558204 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-5cghq" [331e885f-fb66-4db3-a3a2-7b03805ce608] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-5cghq" [331e885f-fb66-4db3-a3a2-7b03805ce608] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.012994723s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31679
functional_test.go:1674: http://192.168.49.2:31679: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-5cghq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31679
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)
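Note: the echoserver body above can be reproduced by hand with the same commands the test uses; `service --url` prints the NodePort endpoint (http://192.168.49.2:31679 in this run):

    URL=$(out/minikube-linux-amd64 -p functional-558204 service hello-node-connect --url)
    curl -s "$URL"   # returns the Hostname / Server values / Request Information block shown above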

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6cb0820f-6e86-46b2-8ff9-453354437c51] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0152715s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-558204 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-558204 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-558204 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558204 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4d96d03-891e-4888-af92-f2f2bed539be] Pending
helpers_test.go:344: "sp-pod" [d4d96d03-891e-4888-af92-f2f2bed539be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4d96d03-891e-4888-af92-f2f2bed539be] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.084540461s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-558204 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558204 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558204 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [781a26a1-25b2-4540-befc-55197d6235fd] Pending
helpers_test.go:344: "sp-pod" [781a26a1-25b2-4540-befc-55197d6235fd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [781a26a1-25b2-4540-befc-55197d6235fd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.016217028s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-558204 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.19s)
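Note: the log above is the whole persistence argument in miniature: write a file into the PVC-backed mount, delete the pod, recreate it, and the file is still there because the claim outlives the pod. Condensed to the commands the test ran:

    kubectl --context functional-558204 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-558204 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-558204 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-558204 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-558204 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
    kubectl --context functional-558204 exec sp-pod -- ls /tmp/mount                     # foo survives the pod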

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh -n functional-558204 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 cp functional-558204:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3062231375/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh -n functional-558204 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.57s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-558204 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-6j668" [c5f7bf7d-c4a2-4849-85ea-27695d9866fe] Pending
helpers_test.go:344: "mysql-859648c796-6j668" [c5f7bf7d-c4a2-4849-85ea-27695d9866fe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-6j668" [c5f7bf7d-c4a2-4849-85ea-27695d9866fe] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.014377904s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;": exit status 1 (281.748757ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
2023/10/24 19:10:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;": exit status 1 (189.463614ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;": exit status 1 (157.361835ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558204 exec mysql-859648c796-6j668 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.01s)
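Note: the three failed `exec` attempts (ERROR 1045, then ERROR 2002 twice) are the usual mysqld startup window: the pod reports Running before the server actually accepts connections, so the test simply retries until the final attempt succeeds. A minimal retry sketch, reusing the pod name from this run (this is not the test's actual helper):

    until kubectl --context functional-558204 exec mysql-859648c796-6j668 -- \
          mysql -ppassword -e 'show databases;'; do
      sleep 2   # keep retrying until mysqld finishes initializing
    done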

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/478323/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /etc/test/nested/copy/478323/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/478323.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /etc/ssl/certs/478323.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/478323.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /usr/share/ca-certificates/478323.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4783232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /etc/ssl/certs/4783232.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4783232.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /usr/share/ca-certificates/4783232.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
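Note: the paired filenames are not independent certificates: `/etc/ssl/certs/51391683.0` and `/etc/ssl/certs/3ec20f2e.0` are OpenSSL subject-hash names for the two synced `.pem` files, which is how OpenSSL locates CA certificates in a hashed directory. A sketch of how the hash name is derived (the hash-to-pem pairing is an assumption based on the check order above):

    openssl x509 -noout -subject_hash -in /etc/ssl/certs/478323.pem    # prints 51391683
    openssl x509 -noout -subject_hash -in /etc/ssl/certs/4783232.pem   # prints 3ec20f2e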

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-558204 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active docker": exit status 1 (366.970902ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active containerd": exit status 1 (348.255171ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
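Note: the non-zero exits here are the point of the test: with crio as the active runtime, `systemctl is-active` prints `inactive` and exits with status 3 for docker and containerd, and `minikube ssh` surfaces the non-zero remote status as exit status 1. A quick sketch:

    out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active crio"     # active, exit 0
    out/minikube-linux-amd64 -p functional-558204 ssh "sudo systemctl is-active docker"   # inactive, exit != 0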

                                                
                                    
x
+
TestFunctional/parallel/License (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 version -o=json --components: (1.487510527s)
--- PASS: TestFunctional/parallel/Version/components (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-558204 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-558204 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-tws4x" [bcd0054a-53f9-4f2e-8cc8-48802cc62b1e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-tws4x" [bcd0054a-53f9-4f2e-8cc8-48802cc62b1e] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.079473302s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.35s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 510958: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558204 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-558204
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558204 image ls --format short --alsologtostderr:
I1024 19:10:20.840908  517659 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:20.841188  517659 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:20.841204  517659 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:20.841214  517659 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:20.842000  517659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
I1024 19:10:20.842971  517659 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:20.843144  517659 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:20.843782  517659 cli_runner.go:164] Run: docker container inspect functional-558204 --format={{.State.Status}}
I1024 19:10:20.874868  517659 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:20.875045  517659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558204
I1024 19:10:20.903618  517659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/functional-558204/id_rsa Username:docker}
I1024 19:10:21.146450  517659 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.53s)
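Note: as the stderr trace shows, `image ls` ultimately SSHes into the node and queries the crio image store via crictl, so the same list can be obtained directly:

    out/minikube-linux-amd64 -p functional-558204 ssh "sudo crictl images --output json"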

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558204 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer  | functional-558204  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | bc649bab30d15 | 191MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 6d1b4fd1b182d | 61.5MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 5374347291230 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 10baa1ca17068 | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | bfc896cf80fba | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | alpine             | 661daf9bcac82 | 44.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558204 image ls --format table --alsologtostderr:
I1024 19:10:21.944869  517886 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:21.945195  517886 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.945207  517886 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:21.945215  517886 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.945552  517886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
I1024 19:10:21.946428  517886 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.946596  517886 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.947239  517886 cli_runner.go:164] Run: docker container inspect functional-558204 --format={{.State.Status}}
I1024 19:10:21.974969  517886 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:21.975044  517886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558204
I1024 19:10:21.998019  517886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/functional-558204/id_rsa Username:docker}
I1024 19:10:22.247267  517886 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558204 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-558204"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8
s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"661daf9bcac824a4be78d50e09fdb7c5d3755e78295c71e1004385244c0c97b1","repoDigests":["docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759a
bbdb6b39e5bbd194ce55ebaf","docker.io/library/nginx@sha256:fc2d39a0d6565db4bd6c94aa7b5efc2da67734cc97388afb5c72369a24bcfaea"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44434729"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"123188534"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906
b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e
126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973","repoDigests":["docker.io/library/nginx@sha256:3a12fc354e3c4dd62196a809e52a5d2f8f385b52fcc62145b0efec5954bb8fa1","docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595"],"repoTags":["docker.io/library/nginx:latest"],"size":"190917887"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":["registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"127165392"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":["registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765
064ca3c45003de97eb8","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"74691991"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"61498678"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558204 image ls --format json --alsologtostderr:
I1024 19:10:21.361297  517742 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:21.361443  517742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.361457  517742 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:21.361465  517742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.361804  517742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
I1024 19:10:21.362773  517742 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.362941  517742 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.363542  517742 cli_runner.go:164] Run: docker container inspect functional-558204 --format={{.State.Status}}
I1024 19:10:21.390545  517742 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:21.390598  517742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558204
I1024 19:10:21.409170  517742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/functional-558204/id_rsa Username:docker}
I1024 19:10:21.647710  517742 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558204 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:86d5311d13774d7beed6fbf181db7d8ace26d1b3d1c85b72c9f9b4d585d409ab
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "127165392"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:695da0e5a1be2d0f94af107e4f29faaa958f1c90e4765064ca3c45003de97eb8
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "74691991"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:fbe8838032fa8f01b36282417596119a481e5bc11eca89270073122f0cc90374
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "61498678"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:dd4817791cfaa85482f27af472e4b100e362134530a7c4bae50f3ce10729d75d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "123188534"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-558204
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 661daf9bcac824a4be78d50e09fdb7c5d3755e78295c71e1004385244c0c97b1
repoDigests:
- docker.io/library/nginx@sha256:7272a6e0f728e95c8641d219676605f3b9e4759abbdb6b39e5bbd194ce55ebaf
- docker.io/library/nginx@sha256:fc2d39a0d6565db4bd6c94aa7b5efc2da67734cc97388afb5c72369a24bcfaea
repoTags:
- docker.io/library/nginx:alpine
size: "44434729"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973
repoDigests:
- docker.io/library/nginx@sha256:3a12fc354e3c4dd62196a809e52a5d2f8f385b52fcc62145b0efec5954bb8fa1
- docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595
repoTags:
- docker.io/library/nginx:latest
size: "190917887"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558204 image ls --format yaml --alsologtostderr:
I1024 19:10:20.840594  517660 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:20.841119  517660 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:20.841133  517660 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:20.841141  517660 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:20.841523  517660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
I1024 19:10:20.842573  517660 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:20.842836  517660 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:20.843980  517660 cli_runner.go:164] Run: docker container inspect functional-558204 --format={{.State.Status}}
I1024 19:10:20.878902  517660 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:20.879003  517660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558204
I1024 19:10:20.903085  517660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/functional-558204/id_rsa Username:docker}
I1024 19:10:21.145978  517660 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (8.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh pgrep buildkitd: exit status 1 (473.90528ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image build -t localhost/my-image:functional-558204 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image build -t localhost/my-image:functional-558204 testdata/build --alsologtostderr: (7.345740044s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-558204 image build -t localhost/my-image:functional-558204 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 38aaac2b5d2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-558204
--> e351dccb8c8
Successfully tagged localhost/my-image:functional-558204
e351dccb8c8fe3e5719f9b3a463a78710b7afe10286a5761e7add65f69f7f388
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-558204 image build -t localhost/my-image:functional-558204 testdata/build --alsologtostderr:
I1024 19:10:21.826210  517859 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:21.826349  517859 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.826353  517859 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:21.826358  517859 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:21.826545  517859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
I1024 19:10:21.827145  517859 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.827761  517859 config.go:182] Loaded profile config "functional-558204": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:10:21.828262  517859 cli_runner.go:164] Run: docker container inspect functional-558204 --format={{.State.Status}}
I1024 19:10:21.854215  517859 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:21.854267  517859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-558204
I1024 19:10:21.879481  517859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33205 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/functional-558204/id_rsa Username:docker}
I1024 19:10:22.046842  517859 build_images.go:151] Building image from path: /tmp/build.4139937567.tar
I1024 19:10:22.046932  517859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1024 19:10:22.060954  517859 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4139937567.tar
I1024 19:10:22.068521  517859 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4139937567.tar: stat -c "%s %y" /var/lib/minikube/build/build.4139937567.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4139937567.tar': No such file or directory
I1024 19:10:22.068664  517859 ssh_runner.go:362] scp /tmp/build.4139937567.tar --> /var/lib/minikube/build/build.4139937567.tar (3072 bytes)
I1024 19:10:22.165017  517859 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4139937567
I1024 19:10:22.177090  517859 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4139937567 -xf /var/lib/minikube/build/build.4139937567.tar
I1024 19:10:22.254739  517859 crio.go:297] Building image: /var/lib/minikube/build/build.4139937567
I1024 19:10:22.254853  517859 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-558204 /var/lib/minikube/build/build.4139937567 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1024 19:10:29.065852  517859 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-558204 /var/lib/minikube/build/build.4139937567 --cgroup-manager=cgroupfs: (6.810942408s)
I1024 19:10:29.065979  517859 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4139937567
I1024 19:10:29.077323  517859 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4139937567.tar
I1024 19:10:29.087624  517859 build_images.go:207] Built localhost/my-image:functional-558204 from /tmp/build.4139937567.tar
I1024 19:10:29.087667  517859 build_images.go:123] succeeded building to: functional-558204
I1024 19:10:29.087674  517859 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.07s)
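Note: the STEP 1/3..3/3 lines let us reconstruct what `testdata/build` must contain: a three-instruction Dockerfile plus a `content.txt`. A hypothetical reconstruction (the real content.txt payload is not shown in the log):

    mkdir -p build && cd build
    echo placeholder > content.txt   # stand-in; the actual file contents are unknown
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    out/minikube-linux-amd64 -p functional-558204 image build -t localhost/my-image:functional-558204 .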

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.032951714s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-558204
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558204 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0d9d3380-1c85-411a-aafd-e9b394efe912] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0d9d3380-1c85-411a-aafd-e9b394efe912] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.059839468s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.45s)
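Note: the tunnel started in StartTunnel above is what makes this service reachable: `minikube tunnel` installs a network route (and must keep running) so that LoadBalancer-type services such as `nginx-svc` get an external IP instead of staying `<pending>`. A sketch, assuming testsvc.yaml declares a LoadBalancer service:

    out/minikube-linux-amd64 -p functional-558204 tunnel &      # must stay running in the background
    kubectl --context functional-558204 get svc nginx-svc -w   # watch EXTERNAL-IP get assigned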

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr: (5.545497092s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.81s)
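Note: image load --daemon copies an image from the host Docker daemon into the node's runtime. A minimal sketch using the same tag as above:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-558204
	# push the tagged image from the host daemon into the node
	out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204
	out/minikube-linux-amd64 -p functional-558204 image ls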

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr: (4.882238338s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.15s)

TestFunctional/parallel/ServiceCmd/List (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service list -o json
functional_test.go:1493: Took "404.965365ms" to run "out/minikube-linux-amd64 -p functional-558204 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30480
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30480
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
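Note: the ServiceCmd variants above are all front-ends for the same NodePort lookup. For reference, using the hello-node service from this run:

	# plain HTTP endpoint (printed, e.g., http://192.168.49.2:30480)
	out/minikube-linux-amd64 -p functional-558204 service hello-node --url
	# HTTPS-scheme variant of the same lookup
	out/minikube-linux-amd64 -p functional-558204 service --namespace=default --https --url hello-node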

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.009086872s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-558204
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image load --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr: (4.359953227s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.63s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558204 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.220.217 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
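Note: direct access works because minikube tunnel holds a route to the cluster's service network while it runs. A minimal two-terminal sketch, assuming the nginx-svc LoadBalancer from the earlier step:

	# terminal 1: keep the tunnel alive (the route exists only while this runs)
	out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr
	# terminal 2: read the assigned ingress IP and hit it directly
	kubectl --context functional-558204 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.105.220.217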

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-558204 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "354.1215ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "73.931869ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "334.768957ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "71.435244ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
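Note: the timing gap above comes from --light skipping the per-cluster status probe. Both forms, for reference:

	out/minikube-linux-amd64 profile list -o json          # full listing, queries each cluster (~335ms here)
	out/minikube-linux-amd64 profile list -o json --light  # config-only listing, no status probe (~71ms here)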

TestFunctional/parallel/MountCmd/any-port (7.62s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdany-port3859189273/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698174607103491478" to /tmp/TestFunctionalparallelMountCmdany-port3859189273/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698174607103491478" to /tmp/TestFunctionalparallelMountCmdany-port3859189273/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698174607103491478" to /tmp/TestFunctionalparallelMountCmdany-port3859189273/001/test-1698174607103491478
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (351.874014ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 test-1698174607103491478
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh cat /mount-9p/test-1698174607103491478
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558204 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [91a0c2b9-a36c-4713-883d-30fa10a42f88] Pending
helpers_test.go:344: "busybox-mount" [91a0c2b9-a36c-4713-883d-30fa10a42f88] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [91a0c2b9-a36c-4713-883d-30fa10a42f88] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [91a0c2b9-a36c-4713-883d-30fa10a42f88] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.013703873s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558204 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdany-port3859189273/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.62s)
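Note: the mount command stays in the foreground while it serves the 9p share, which is why the first findmnt probe above races it and retries. A minimal sketch, with a hypothetical host directory /tmp/shared:

	# terminal 1: export the host directory into the node at /mount-9p
	out/minikube-linux-amd64 mount -p functional-558204 /tmp/shared:/mount-9p --alsologtostderr -v=1
	# terminal 2: verify from inside the node
	out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-558204 ssh -- ls -la /mount-9p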

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image save gcr.io/google-containers/addon-resizer:functional-558204 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image save gcr.io/google-containers/addon-resizer:functional-558204 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.022763851s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image rm gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-558204 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.235040496s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.51s)
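Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tarball round trip. The sequence as plain commands, for reference:

	out/minikube-linux-amd64 -p functional-558204 image save gcr.io/google-containers/addon-resizer:functional-558204 ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-558204 image rm gcr.io/google-containers/addon-resizer:functional-558204
	# restore the image into the node from the tarball
	out/minikube-linux-amd64 -p functional-558204 image load ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-558204 image ls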

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
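Note: all three UpdateContextCmd cases run the same command; it rewrites the profile's kubeconfig entry to point at the cluster's current endpoint:

	out/minikube-linux-amd64 -p functional-558204 update-context --alsologtostderr -v=2
	# sanity-check which context kubectl now targets
	kubectl config current-context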

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-558204
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 image save --daemon gcr.io/google-containers/addon-resizer:functional-558204 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-558204
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.03s)

TestFunctional/parallel/MountCmd/specific-port (2.55s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdspecific-port570028270/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.574527ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdspecific-port570028270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "sudo umount -f /mount-9p": exit status 1 (513.434033ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-558204 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdspecific-port570028270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.55s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T" /mount1: exit status 1 (690.88199ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-558204 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-558204 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-558204 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2907756730/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.87s)
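Note: specific-port pins the 9p server to a fixed host port, and --kill tears down every mount process for a profile at once (which is how VerifyCleanup reaps its three mounts). A minimal sketch with a hypothetical /tmp/shared source:

	# serve the share on a fixed port instead of a random one
	out/minikube-linux-amd64 mount -p functional-558204 /tmp/shared:/mount-9p --port 46464
	# later: kill all outstanding mount processes for the profile
	out/minikube-linux-amd64 mount -p functional-558204 --kill=true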

TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-558204
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-558204
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558204
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (75.28s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-462645 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1024 19:11:02.715416  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-462645 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m15.278789244s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.28s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons enable ingress --alsologtostderr -v=5: (11.483310875s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-462645 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

TestJSONOutput/start/Command (69.38s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-589311 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1024 19:15:31.627600  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:16:12.588566  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-589311 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.374136806s)
--- PASS: TestJSONOutput/start/Command (69.38s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-589311 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.71s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-589311 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.01s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-589311 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-589311 --output=json --user=testUser: (6.012335916s)
--- PASS: TestJSONOutput/stop/Command (6.01s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-974404 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-974404 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.944598ms)
-- stdout --
	{"specversion":"1.0","id":"0966b988-03b7-481e-bf41-7db62095f742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-974404] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc858240-b71b-4db0-a920-1740f5f8e0a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"68425c46-25f4-4014-b870-27e8b9080a36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a521ca02-f04b-4be6-99a4-cc1db9c4916e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig"}}
	{"specversion":"1.0","id":"0074cb2c-405c-4183-9c63-dc41eec57f97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube"}}
	{"specversion":"1.0","id":"ca0f104a-95a3-44e9-9db3-ccfd354f48f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3198e1db-1ed7-49c0-a822-77e789201d3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"332b0cc0-8bc4-42bb-913a-46620055a395","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-974404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-974404
--- PASS: TestErrorJSONOutput (0.26s)
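Note: with --output=json, every stdout line is a CloudEvent, including the terminal error event above (exitcode 56, DRV_UNSUPPORTED_OS). A sketch for consuming the stream, assuming jq is installed and a hypothetical profile name demo:

	# print just the human-readable message of each event
	out/minikube-linux-amd64 start -p demo --memory=2200 --output=json --user=testUser | jq -r '.data.message'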

TestKicCustomNetwork/create_custom_network (35.85s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-222200 --network=
E1024 19:17:11.836194  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:11.841624  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:11.852088  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:11.872605  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:11.912995  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:11.993553  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:12.154085  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:12.474952  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:13.116165  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:14.397487  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-222200 --network=: (33.608672873s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-222200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-222200
E1024 19:17:16.958334  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-222200: (2.217000938s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.85s)

TestKicCustomNetwork/use_default_bridge_network (27.26s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-030847 --network=bridge
E1024 19:17:22.078875  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:32.320229  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:17:34.509338  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-030847 --network=bridge: (25.178296018s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-030847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-030847
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-030847: (2.056491421s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.26s)

TestKicExistingNetwork (28.43s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-217255 --network=existing-network
E1024 19:17:52.800665  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-217255 --network=existing-network: (26.192285382s)
helpers_test.go:175: Cleaning up "existing-network-217255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-217255
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-217255: (2.078146984s)
--- PASS: TestKicExistingNetwork (28.43s)

TestKicCustomSubnet (27.12s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-762137 --subnet=192.168.60.0/24
E1024 19:18:18.869628  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:18:33.761156  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-762137 --subnet=192.168.60.0/24: (24.959933126s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-762137 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-762137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-762137
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-762137: (2.1394574s)
--- PASS: TestKicCustomSubnet (27.12s)

TestKicStaticIP (30.18s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-422180 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-422180 --static-ip=192.168.200.200: (27.732699501s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-422180 ip
helpers_test.go:175: Cleaning up "static-ip-422180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-422180
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-422180: (2.271898715s)
--- PASS: TestKicStaticIP (30.18s)
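Note: the Kic tests above exercise the docker driver's start-time network flags, one per profile. For reference, with a hypothetical profile net-demo (pick a single flag per start):

	out/minikube-linux-amd64 start -p net-demo --network=my-net              # attach to a named or existing Docker network
	out/minikube-linux-amd64 start -p net-demo --subnet=192.168.60.0/24      # choose the subnet of the created network
	out/minikube-linux-amd64 start -p net-demo --static-ip=192.168.200.200   # pin the node's IP
	# inspect what Docker allocated (network is named after the profile unless --network was given)
	docker network inspect net-demo --format "{{(index .IPAM.Config 0).Subnet}}"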

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (57.95s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-738779 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-738779 --driver=docker  --container-runtime=crio: (26.458259489s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-740899 --driver=docker  --container-runtime=crio
E1024 19:19:50.664529  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:19:55.682061  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-740899 --driver=docker  --container-runtime=crio: (26.321213336s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-738779
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-740899
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-740899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-740899
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-740899: (2.005139891s)
helpers_test.go:175: Cleaning up "first-738779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-738779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-738779: (2.011023268s)
--- PASS: TestMinikubeProfile (57.95s)
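Note: minikube profile NAME switches the active profile that bare commands operate on. The flow above, reduced, with hypothetical profile names:

	out/minikube-linux-amd64 start -p first-demo --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p second-demo --driver=docker --container-runtime=crio
	# make first-demo the default target for subsequent commands
	out/minikube-linux-amd64 profile first-demo
	out/minikube-linux-amd64 profile list -ojson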

TestMountStart/serial/StartWithMountFirst (6.2s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-173651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-173651 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.200560971s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.20s)
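Note: unlike the runtime mount command tested earlier, these flags wire the host share in at start, and --no-kubernetes keeps the node kubeless so only the mount is exercised. Reduced sketch with a hypothetical profile mount-demo:

	out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
	# the host directory appears at the default guest mount point
	out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host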

TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-173651 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (5.67s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-193912 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1024 19:20:18.350487  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-193912 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.671482367s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.67s)

TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-193912 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-173651 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-173651 --alsologtostderr -v=5: (1.729300728s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-193912 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-193912
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-193912: (1.24487405s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.23s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-193912
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-193912: (6.22648331s)
--- PASS: TestMountStart/serial/RestartStopped (7.23s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-193912 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (70.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-961484 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-961484 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m9.842246096s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.37s)

TestMultiNode/serial/DeployApp2Nodes (4.52s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-961484 -- rollout status deployment/busybox: (2.322995605s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-961484 -- exec busybox-5bc68d56bd-px9mp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.52s)
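Note: DeployApp2Nodes asserts that both busybox replicas can resolve the short, namespaced, and fully qualified service names. A minimal sketch of the same check with plain kubectl (the pod names below are the generated ones from this run and will differ per run):

	kubectl --context multinode-961484 exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.io
	kubectl --context multinode-961484 exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.default
	kubectl --context multinode-961484 exec busybox-5bc68d56bd-j2cch -- nslookup kubernetes.default.svc.cluster.local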

TestMultiNode/serial/AddNode (17.87s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-961484 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-961484 -v 3 --alsologtostderr: (17.217643284s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.87s)

TestMultiNode/serial/ProfileList (0.32s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

TestMultiNode/serial/CopyFile (10.81s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp testdata/cp-test.txt multinode-961484:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698564994/001/cp-test_multinode-961484.txt
E1024 19:22:11.836384  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484:/home/docker/cp-test.txt multinode-961484-m02:/home/docker/cp-test_multinode-961484_multinode-961484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test_multinode-961484_multinode-961484-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484:/home/docker/cp-test.txt multinode-961484-m03:/home/docker/cp-test_multinode-961484_multinode-961484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test_multinode-961484_multinode-961484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp testdata/cp-test.txt multinode-961484-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698564994/001/cp-test_multinode-961484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m02:/home/docker/cp-test.txt multinode-961484:/home/docker/cp-test_multinode-961484-m02_multinode-961484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test_multinode-961484-m02_multinode-961484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m02:/home/docker/cp-test.txt multinode-961484-m03:/home/docker/cp-test_multinode-961484-m02_multinode-961484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test_multinode-961484-m02_multinode-961484-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp testdata/cp-test.txt multinode-961484-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3698564994/001/cp-test_multinode-961484-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m03:/home/docker/cp-test.txt multinode-961484:/home/docker/cp-test_multinode-961484-m03_multinode-961484.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484 "sudo cat /home/docker/cp-test_multinode-961484-m03_multinode-961484.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 cp multinode-961484-m03:/home/docker/cp-test.txt multinode-961484-m02:/home/docker/cp-test_multinode-961484-m03_multinode-961484-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 ssh -n multinode-961484-m02 "sudo cat /home/docker/cp-test_multinode-961484-m03_multinode-961484-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.81s)
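Note: CopyFile exercises every direction of `minikube cp`: host-to-node, node-to-host, and node-to-node, each verified with an ssh'd `cat`. A minimal sketch (the local destination filename is hypothetical; everything else comes from the log):

	minikube -p multinode-961484 cp testdata/cp-test.txt multinode-961484-m02:/home/docker/cp-test.txt        # host -> node
	minikube -p multinode-961484 cp multinode-961484-m02:/home/docker/cp-test.txt ./cp-test-local.txt          # node -> host
	minikube -p multinode-961484 cp multinode-961484-m02:/home/docker/cp-test.txt multinode-961484-m03:/home/docker/cp-test.txt   # node -> node
	minikube -p multinode-961484 ssh -n multinode-961484-m03 "sudo cat /home/docker/cp-test.txt"               # verify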

TestMultiNode/serial/StopNode (2.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-961484 node stop m03: (1.25724058s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-961484 status: exit status 7 (543.386733ms)

-- stdout --
	multinode-961484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-961484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-961484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr: exit status 7 (543.412665ms)

-- stdout --
	multinode-961484
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-961484-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-961484-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1024 19:22:22.939449  578341 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:22:22.940628  578341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:22:22.940655  578341 out.go:309] Setting ErrFile to fd 2...
	I1024 19:22:22.940662  578341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:22:22.941057  578341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:22:22.941393  578341 out.go:303] Setting JSON to false
	I1024 19:22:22.941453  578341 mustload.go:65] Loading cluster: multinode-961484
	I1024 19:22:22.941521  578341 notify.go:220] Checking for updates...
	I1024 19:22:22.942517  578341 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:22:22.942557  578341 status.go:255] checking status of multinode-961484 ...
	I1024 19:22:22.943431  578341 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:22:22.968715  578341 status.go:330] multinode-961484 host status = "Running" (err=<nil>)
	I1024 19:22:22.968746  578341 host.go:66] Checking if "multinode-961484" exists ...
	I1024 19:22:22.969126  578341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484
	I1024 19:22:22.989291  578341 host.go:66] Checking if "multinode-961484" exists ...
	I1024 19:22:22.989589  578341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:22:22.989637  578341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484
	I1024 19:22:23.012173  578341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33270 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484/id_rsa Username:docker}
	I1024 19:22:23.102609  578341 ssh_runner.go:195] Run: systemctl --version
	I1024 19:22:23.106867  578341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:22:23.117851  578341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:22:23.180766  578341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:56 SystemTime:2023-10-24 19:22:23.169216144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:22:23.181685  578341 kubeconfig.go:92] found "multinode-961484" server: "https://192.168.58.2:8443"
	I1024 19:22:23.181724  578341 api_server.go:166] Checking apiserver status ...
	I1024 19:22:23.181775  578341 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:22:23.196247  578341 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	I1024 19:22:23.207700  578341 api_server.go:182] apiserver freezer: "5:freezer:/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio/crio-978b5b41effe96056c2d4b38df3bda868b88f2456201037bc80615dd06214def"
	I1024 19:22:23.207784  578341 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a82cc8c1628378c5b92c3db0c1014a567f91c1a1c2d35aa03f63b3ca66caeebb/crio/crio-978b5b41effe96056c2d4b38df3bda868b88f2456201037bc80615dd06214def/freezer.state
	I1024 19:22:23.217352  578341 api_server.go:204] freezer state: "THAWED"
	I1024 19:22:23.217391  578341 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1024 19:22:23.224068  578341 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1024 19:22:23.224110  578341 status.go:421] multinode-961484 apiserver status = Running (err=<nil>)
	I1024 19:22:23.224124  578341 status.go:257] multinode-961484 status: &{Name:multinode-961484 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:22:23.224167  578341 status.go:255] checking status of multinode-961484-m02 ...
	I1024 19:22:23.224473  578341 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Status}}
	I1024 19:22:23.245645  578341 status.go:330] multinode-961484-m02 host status = "Running" (err=<nil>)
	I1024 19:22:23.245674  578341 host.go:66] Checking if "multinode-961484-m02" exists ...
	I1024 19:22:23.245997  578341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-961484-m02
	I1024 19:22:23.264525  578341 host.go:66] Checking if "multinode-961484-m02" exists ...
	I1024 19:22:23.264914  578341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:22:23.264956  578341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-961484-m02
	I1024 19:22:23.283031  578341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/17485-471553/.minikube/machines/multinode-961484-m02/id_rsa Username:docker}
	I1024 19:22:23.374035  578341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:22:23.384826  578341 status.go:257] multinode-961484-m02 status: &{Name:multinode-961484-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:22:23.384862  578341 status.go:255] checking status of multinode-961484-m03 ...
	I1024 19:22:23.385246  578341 cli_runner.go:164] Run: docker container inspect multinode-961484-m03 --format={{.State.Status}}
	I1024 19:22:23.402812  578341 status.go:330] multinode-961484-m03 host status = "Stopped" (err=<nil>)
	I1024 19:22:23.402839  578341 status.go:343] host is not running, skipping remaining checks
	I1024 19:22:23.402848  578341 status.go:257] multinode-961484-m03 status: &{Name:multinode-961484-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
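Note: the --alsologtostderr trace above shows how `minikube status` probes a control-plane node: locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz. A hand-run sketch of the same probe, assuming curl is available inside the node image (the cgroup path contains per-run container IDs, so placeholders stand in for them):

	minikube -p multinode-961484 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
	minikube -p multinode-961484 ssh -- "sudo cat /sys/fs/cgroup/freezer/docker/<node-id>/crio/crio-<apiserver-id>/freezer.state"   # expect THAWED
	minikube -p multinode-961484 ssh -- "curl -sk https://192.168.58.2:8443/healthz"   # expect ok

With any node stopped, `minikube status` exits with code 7, which is the Non-zero exit recorded above.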

TestMultiNode/serial/StartAfterStop (11.4s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-961484 node start m03 --alsologtostderr: (10.641792997s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.40s)
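Note: the per-node lifecycle used by StopNode/StartAfterStop, as plain commands (profile and node names from the log):

	minikube -p multinode-961484 node stop m03
	minikube -p multinode-961484 status           # exit code 7 while m03 is down
	minikube -p multinode-961484 node start m03 --alsologtostderr
	minikube -p multinode-961484 status           # clean exit once all nodes report Running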

TestMultiNode/serial/RestartKeepsNodes (114.43s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-961484
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-961484
E1024 19:22:39.522728  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-961484: (25.14688873s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-961484 --wait=true -v=8 --alsologtostderr
E1024 19:23:18.868613  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-961484 --wait=true -v=8 --alsologtostderr: (1m29.127568439s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-961484
--- PASS: TestMultiNode/serial/RestartKeepsNodes (114.43s)

TestMultiNode/serial/DeleteNode (4.94s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-961484 node delete m03: (4.270238395s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

TestMultiNode/serial/StopMultiNode (24.18s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 stop
E1024 19:24:41.918391  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:24:50.664246  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-961484 stop: (23.937489214s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-961484 status: exit status 7 (112.737111ms)

-- stdout --
	multinode-961484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-961484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr: exit status 7 (132.625717ms)

-- stdout --
	multinode-961484
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-961484-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1024 19:24:58.302196  588469 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:24:58.302366  588469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:24:58.302376  588469 out.go:309] Setting ErrFile to fd 2...
	I1024 19:24:58.302383  588469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:24:58.302608  588469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:24:58.302844  588469 out.go:303] Setting JSON to false
	I1024 19:24:58.302883  588469 mustload.go:65] Loading cluster: multinode-961484
	I1024 19:24:58.303045  588469 notify.go:220] Checking for updates...
	I1024 19:24:58.303612  588469 config.go:182] Loaded profile config "multinode-961484": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:58.303663  588469 status.go:255] checking status of multinode-961484 ...
	I1024 19:24:58.304402  588469 cli_runner.go:164] Run: docker container inspect multinode-961484 --format={{.State.Status}}
	I1024 19:24:58.331261  588469 status.go:330] multinode-961484 host status = "Stopped" (err=<nil>)
	I1024 19:24:58.331366  588469 status.go:343] host is not running, skipping remaining checks
	I1024 19:24:58.331376  588469 status.go:257] multinode-961484 status: &{Name:multinode-961484 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:24:58.331434  588469 status.go:255] checking status of multinode-961484-m02 ...
	I1024 19:24:58.331698  588469 cli_runner.go:164] Run: docker container inspect multinode-961484-m02 --format={{.State.Status}}
	I1024 19:24:58.354748  588469 status.go:330] multinode-961484-m02 host status = "Stopped" (err=<nil>)
	I1024 19:24:58.354807  588469 status.go:343] host is not running, skipping remaining checks
	I1024 19:24:58.354816  588469 status.go:257] multinode-961484-m02 status: &{Name:multinode-961484-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.18s)

TestMultiNode/serial/RestartMultiNode (80.44s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-961484 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-961484 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m19.767520839s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-961484 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.44s)

TestMultiNode/serial/ValidateNameConflict (28.87s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-961484
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-961484-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-961484-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.267288ms)

-- stdout --
	* [multinode-961484-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-961484-m02' is duplicated with machine name 'multinode-961484-m02' in profile 'multinode-961484'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-961484-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-961484-m03 --driver=docker  --container-runtime=crio: (26.369890074s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-961484
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-961484: exit status 80 (327.143715ms)

-- stdout --
	* Adding node m03 to cluster multinode-961484
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-961484-m03 already exists in multinode-961484-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-961484-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-961484-m03: (2.01139598s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (28.87s)

TestPreload (152.98s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-187373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1024 19:27:11.835951  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-187373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m14.417133907s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-187373 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-187373
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-187373: (5.790466006s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-187373 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1024 19:28:18.869054  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-187373 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m9.144667779s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-187373 image list
helpers_test.go:175: Cleaning up "test-preload-187373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-187373
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-187373: (2.43019805s)
--- PASS: TestPreload (152.98s)
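Note: TestPreload checks that an image pulled into a non-preloaded v1.24.4 cluster is still present after a stop and a restart on the current default version. The sequence, as plain commands with values from the log:

	minikube start -p test-preload-187373 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
	minikube -p test-preload-187373 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-187373
	minikube start -p test-preload-187373 --memory=2200 --driver=docker --container-runtime=crio
	minikube -p test-preload-187373 image list    # the pulled busybox image should still be listed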

TestScheduledStopUnix (102.98s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-965670 --memory=2048 --driver=docker  --container-runtime=crio
E1024 19:29:50.664956  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-965670 --memory=2048 --driver=docker  --container-runtime=crio: (26.274537523s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965670 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-965670 -n scheduled-stop-965670
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965670 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965670 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965670 -n scheduled-stop-965670
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-965670
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-965670 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-965670
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-965670: exit status 7 (93.238209ms)

-- stdout --
	scheduled-stop-965670
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965670 -n scheduled-stop-965670
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-965670 -n scheduled-stop-965670: exit status 7 (91.128614ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-965670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-965670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-965670: (5.095689455s)
--- PASS: TestScheduledStopUnix (102.98s)
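Note: the scheduled-stop flags exercised above, as plain commands (all verbatim from the log):

	minikube stop -p scheduled-stop-965670 --schedule 5m         # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-965670
	minikube stop -p scheduled-stop-965670 --cancel-scheduled    # disarm it
	minikube stop -p scheduled-stop-965670 --schedule 15s        # re-arm; once it fires, status exits 7 with host: Stopped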

TestInsufficientStorage (14.15s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-444942 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E1024 19:31:13.712547  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-444942 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.582108599s)

-- stdout --
	{"specversion":"1.0","id":"1938398d-9493-42ba-bd48-41e82c3765d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-444942] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12653e6e-1329-4753-80d5-4c6ec705de82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"f86c1bab-8acf-4e98-8d84-e74d400eb984","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f689c8e6-b3a4-457a-b7a6-e39157a652b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig"}}
	{"specversion":"1.0","id":"5121f6da-2fc0-480e-8396-eba0b10c8a32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube"}}
	{"specversion":"1.0","id":"98bd11e8-a481-477e-a79f-d091acad7968","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3dec3db5-31f3-4dd7-b5f0-7a1b942d17a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bfbfe782-5cc4-437a-ab4a-aaa6e7218fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"df04a41a-1419-49bd-ac55-78d69356302d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b699596d-1ab3-487e-8dff-6b1463e872db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f4fc5d7-c7d8-4db9-8856-3c02609cf291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0d5e4b49-f549-44ed-bd78-c089be5b4333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-444942 in cluster insufficient-storage-444942","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa2322a1-26e8-41af-9dc6-c5fa25dc3ef7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8a75f18-0ecf-4301-aa78-8ba2bb89dfd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e7ff766-c492-4cac-a163-4502876095db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-444942 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-444942 --output=json --layout=cluster: exit status 7 (316.445219ms)

-- stdout --
	{"Name":"insufficient-storage-444942","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-444942","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1024 19:31:21.897565  610347 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-444942" does not appear in /home/jenkins/minikube-integration/17485-471553/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-444942 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-444942 --output=json --layout=cluster: exit status 7 (294.703899ms)

-- stdout --
	{"Name":"insufficient-storage-444942","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-444942","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1024 19:31:22.195848  610434 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-444942" does not appear in /home/jenkins/minikube-integration/17485-471553/kubeconfig
	E1024 19:31:22.206811  610434 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/insufficient-storage-444942/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-444942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-444942
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-444942: (1.952075933s)
--- PASS: TestInsufficientStorage (14.15s)
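Note: with --output=json, `minikube start` emits one CloudEvents-style JSON object per line, and the storage failure above arrives as an io.k8s.sigs.minikube.error event carrying exitcode 26 and an advice block. A minimal sketch of extracting the error message from that stream, assuming jq is installed (jq is not part of the test itself):

	minikube start -p insufficient-storage-444942 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'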

TestKubernetesUpgrade (353.88s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.106508012s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-830809
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-830809: (1.333559123s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-830809 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-830809 status --format={{.Host}}: exit status 7 (132.39195ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1024 19:33:34.883452  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.470623225s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-830809 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (134.892177ms)

-- stdout --
	* [kubernetes-upgrade-830809] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-830809
	    minikube start -p kubernetes-upgrade-830809 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8308092 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-830809 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.320825832s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-830809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-830809
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-830809: (2.307586461s)
--- PASS: TestKubernetesUpgrade (353.88s)
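Note: TestKubernetesUpgrade reduces to an in-place version upgrade plus a refused downgrade. As plain commands (versions and profile from the log):

	minikube start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-830809
	minikube start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio   # in-place upgrade
	minikube start -p kubernetes-upgrade-830809 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # refused with K8S_DOWNGRADE_UNSUPPORTED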

TestMissingContainerUpgrade (158.18s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.2876357073.exe start -p missing-upgrade-487285 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.2876357073.exe start -p missing-upgrade-487285 --memory=2200 --driver=docker  --container-runtime=crio: (1m28.601678497s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-487285
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-487285: (4.590424177s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-487285
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-487285 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-487285 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (59.802953525s)
helpers_test.go:175: Cleaning up "missing-upgrade-487285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-487285
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-487285: (4.64070594s)
--- PASS: TestMissingContainerUpgrade (158.18s)
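Note: TestMissingContainerUpgrade simulates a cluster whose node container was removed behind an old minikube's back, then verifies the current binary can recover it. As plain commands (the versioned binary is the temp file from the log):

	/tmp/minikube-v1.9.0.2876357073.exe start -p missing-upgrade-487285 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-487285 && docker rm missing-upgrade-487285    # delete the node container out-of-band
	minikube start -p missing-upgrade-487285 --memory=2200 --driver=docker --container-runtime=crio   # current binary recreates it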

TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (135.080974ms)

-- stdout --
	* [NoKubernetes-331263] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

TestNoKubernetes/serial/StartWithK8s (39.23s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-331263 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-331263 --driver=docker  --container-runtime=crio: (38.857496963s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-331263 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.23s)

TestNoKubernetes/serial/StartWithStopK8s (8.13s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --driver=docker  --container-runtime=crio: (5.097714044s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-331263 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-331263 status -o json: exit status 2 (369.181873ms)

-- stdout --
	{"Name":"NoKubernetes-331263","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-331263
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-331263: (2.665594293s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.13s)
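The status JSON above ({"Host":"Running","Kubelet":"Stopped",...}) is meant for scripting; note that minikube still exits 2 because a component is stopped, even though the output is valid. A sketch for pulling out a single field, assuming jq is available:

	$ minikube -p NoKubernetes-331263 status -o json | jq -r .Kubelet   # prints Stopped; minikube itself exits 2, so guard with || true in scripts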

TestNoKubernetes/serial/Start (10.57s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --driver=docker  --container-runtime=crio
E1024 19:32:11.836551  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-331263 --no-kubernetes --driver=docker  --container-runtime=crio: (10.567898599s)
--- PASS: TestNoKubernetes/serial/Start (10.57s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-331263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-331263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (337.706314ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
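The check leans on systemctl exit codes: is-active exits 0 for an active unit and 3 for an inactive one, and minikube ssh surfaces the remote failure as its own non-zero exit (hence "Process exited with status 3" on stderr while the local exit status is 1). A sketch of the same probe without --quiet, so the state is printed (a manual variant, not the test's exact command):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-331263 "sudo systemctl is-active kubelet"   # prints inactive; remote exit code 3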

TestNoKubernetes/serial/ProfileList (1.39s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)
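profile list also has a JSON mode for scripting, exercised by the second command above. A sketch for extracting just the profile names, assuming jq and assuming the usual valid/invalid arrays in minikube's JSON output:

	$ minikube profile list --output=json | jq -r '.valid[].Name'   # one valid profile name per line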

TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-331263
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-331263: (1.290220366s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.85s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-331263 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-331263 --driver=docker  --container-runtime=crio: (7.849428711s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-331263 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-331263 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.800653ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (0.39s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.39s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-878231
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestPause/serial/Start (40.57s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-639553 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1024 19:34:50.665092  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-639553 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (40.566795468s)
--- PASS: TestPause/serial/Start (40.57s)

TestNetworkPlugins/group/false (4.99s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-973203 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-973203 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (218.873967ms)
-- stdout --
	* [false-973203] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1024 19:35:29.542002  662507 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:35:29.542293  662507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:29.542305  662507 out.go:309] Setting ErrFile to fd 2...
	I1024 19:35:29.542311  662507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:35:29.542576  662507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-471553/.minikube/bin
	I1024 19:35:29.543236  662507 out.go:303] Setting JSON to false
	I1024 19:35:29.544950  662507 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11877,"bootTime":1698164253,"procs":550,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:35:29.545084  662507 start.go:138] virtualization: kvm guest
	I1024 19:35:29.548424  662507 out.go:177] * [false-973203] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:35:29.550892  662507 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:35:29.550978  662507 notify.go:220] Checking for updates...
	I1024 19:35:29.553515  662507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:35:29.555921  662507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-471553/kubeconfig
	I1024 19:35:29.558201  662507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-471553/.minikube
	I1024 19:35:29.560581  662507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:35:29.562915  662507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:35:29.565575  662507 config.go:182] Loaded profile config "cert-expiration-381520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:29.565699  662507 config.go:182] Loaded profile config "kubernetes-upgrade-830809": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:29.565842  662507 config.go:182] Loaded profile config "pause-639553": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:35:29.565938  662507 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:35:29.603293  662507 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:35:29.603380  662507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:35:29.671186  662507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:71 OomKillDisable:true NGoroutines:66 SystemTime:2023-10-24 19:35:29.658715595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648046080 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1024 19:35:29.671279  662507 docker.go:295] overlay module found
	I1024 19:35:29.674607  662507 out.go:177] * Using the docker driver based on user configuration
	I1024 19:35:29.676285  662507 start.go:298] selected driver: docker
	I1024 19:35:29.676307  662507 start.go:902] validating driver "docker" against <nil>
	I1024 19:35:29.676323  662507 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:35:29.678838  662507 out.go:177] 
	W1024 19:35:29.680486  662507 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1024 19:35:29.682309  662507 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-973203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-973203

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-973203

>>> host: /etc/nsswitch.conf:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/hosts:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/resolv.conf:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-973203

>>> host: crictl pods:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: crictl containers:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> k8s: describe netcat deployment:
error: context "false-973203" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-973203" does not exist

>>> k8s: netcat logs:
error: context "false-973203" does not exist

>>> k8s: describe coredns deployment:
error: context "false-973203" does not exist

>>> k8s: describe coredns pods:
error: context "false-973203" does not exist

>>> k8s: coredns logs:
error: context "false-973203" does not exist

>>> k8s: describe api server pod(s):
error: context "false-973203" does not exist

>>> k8s: api server logs:
error: context "false-973203" does not exist

>>> host: /etc/cni:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: ip a s:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: ip r s:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: iptables-save:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: iptables table nat:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> k8s: describe kube-proxy daemon set:
error: context "false-973203" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-973203" does not exist

>>> k8s: kube-proxy logs:
error: context "false-973203" does not exist

>>> host: kubelet daemon status:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: kubelet daemon config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> k8s: kubelet logs:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:33:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-830809
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-639553
contexts:
- context:
    cluster: kubernetes-upgrade-830809
    user: kubernetes-upgrade-830809
  name: kubernetes-upgrade-830809
- context:
    cluster: pause-639553
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-639553
  name: pause-639553
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-830809
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.key
- name: pause-639553
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-973203

>>> host: docker daemon status:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: docker daemon config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/docker/daemon.json:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: docker system info:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: cri-docker daemon status:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: cri-docker daemon config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: cri-dockerd version:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: containerd daemon status:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: containerd daemon config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/containerd/config.toml:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: containerd config dump:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: crio daemon status:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: crio daemon config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: /etc/crio:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"

>>> host: crio config:
* Profile "false-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-973203"
----------------------- debugLogs end: false-973203 [took: 4.540440975s] --------------------------------
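Every probe above fails with "context was not found" or "profile not found" because the start command was rejected before any cluster was provisioned; the test still passes ([pass: true]) because the flag validation, not the cluster, is what is under test. Note also that current-context in the captured kubeconfig is empty, so kubectl needs an explicit --context. A sketch against the profiles that do exist in that kubeconfig:

	$ kubectl config get-contexts -o name
	kubernetes-upgrade-830809
	pause-639553
	$ kubectl --context pause-639553 get pods -A    # target a context explicitly while current-context is unset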
helpers_test.go:175: Cleaning up "false-973203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-973203
--- PASS: TestNetworkPlugins/group/false (4.99s)
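As with the --no-kubernetes/--kubernetes-version test above, passing here means the start was correctly refused: CRI-O ships no built-in networking, so minikube requires a CNI and rejects --cni=false with MK_USAGE (exit 14). A sketch of the rejected invocation next to an accepted one (the profile name demo is hypothetical):

	$ minikube start -p demo --cni=false --container-runtime=crio --driver=docker    # rejected: the "crio" container runtime requires CNI
	$ minikube start -p demo --cni=bridge --container-runtime=crio --driver=docker   # any supported CNI (bridge, kindnet, calico, ...) is accepted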

TestStartStop/group/old-k8s-version/serial/FirstStart (125.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-880692 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-880692 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m5.214574097s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.21s)

TestStartStop/group/no-preload/serial/FirstStart (64.13s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-539193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-539193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m4.130691492s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.13s)

TestStartStop/group/embed-certs/serial/FirstStart (49.01s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-099862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 19:37:11.836901  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-099862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (49.01048048s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.01s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-099862 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [84a975d8-5fd9-4e0b-a5ef-356ca423d455] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [84a975d8-5fd9-4e0b-a5ef-356ca423d455] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.020407964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-099862 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
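The helper watches the busybox pod through Pending/ContainersNotReady to Running before exec'ing into it. Roughly the same flow by hand, as a sketch in which kubectl wait stands in for the test's own polling helper:

	$ kubectl --context embed-certs-099862 create -f testdata/busybox.yaml
	$ kubectl --context embed-certs-099862 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context embed-certs-099862 exec busybox -- /bin/sh -c "ulimit -n"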

TestStartStop/group/no-preload/serial/DeployApp (7.47s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-539193 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [123425b7-268e-44ab-876f-554e874b645f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [123425b7-268e-44ab-876f-554e874b645f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.01837394s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-539193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.47s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-099862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-099862 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095558891s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-099862 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.1s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-099862 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-099862 --alsologtostderr -v=3: (12.095976127s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-539193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-539193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059913995s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-539193 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.14s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-539193 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-539193 --alsologtostderr -v=3: (12.136163114s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099862 -n embed-certs-099862
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099862 -n embed-certs-099862: exit status 7 (91.850929ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-099862 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
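minikube status exits non-zero when components are down, and the test treats exit status 7 as acceptable ("may be ok"): the value appears to be a bitmask combining the stopped host (1), stopped cluster (2), and stopped Kubernetes (4) flags, which matches the state right after the Stop step. A sketch reproducing the check (the Stopped output is from the capture above; the bitmask reading is an assumption):

	$ minikube status --format={{.Host}} -p embed-certs-099862; echo $?   # bitmask reading of 7 is an assumption
	Stopped
	7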

TestStartStop/group/embed-certs/serial/SecondStart (338.19s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-099862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-099862 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m37.533093196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-099862 -n embed-certs-099862
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.19s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-539193 -n no-preload-539193
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-539193 -n no-preload-539193: exit status 7 (118.75342ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-539193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/SecondStart (341.23s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-539193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-539193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m40.637498151s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-539193 -n no-preload-539193
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (341.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-880692 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8f1c89a-38d1-441d-942d-f16e74a118f9] Pending
helpers_test.go:344: "busybox" [e8f1c89a-38d1-441d-942d-f16e74a118f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8f1c89a-38d1-441d-942d-f16e74a118f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.027479459s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-880692 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-880692 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-880692 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (12.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-880692 --alsologtostderr -v=3
E1024 19:38:18.869011  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-880692 --alsologtostderr -v=3: (12.186067639s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.87s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-801499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-801499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (44.873141534s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.87s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880692 -n old-k8s-version-880692
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880692 -n old-k8s-version-880692: exit status 7 (131.03354ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-880692 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/old-k8s-version/serial/SecondStart (422.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-880692 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-880692 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m2.126485485s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-880692 -n old-k8s-version-880692
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (422.55s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-801499 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f631fd0e-804c-49f9-a75e-2f1e925436c0] Pending
helpers_test.go:344: "busybox" [f631fd0e-804c-49f9-a75e-2f1e925436c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f631fd0e-804c-49f9-a75e-2f1e925436c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.020226947s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-801499 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-801499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-801499 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008000868s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-801499 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-801499 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-801499 --alsologtostderr -v=3: (12.083126871s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499: exit status 7 (101.087079ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-801499 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)
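
The status error above exits with code 7 while printing Stopped, and the harness notes it "may be ok": for a deliberately stopped profile, a non-zero status is expected. A small Go sketch of that tolerant check, assuming (as this log suggests) that exit status 7 simply reflects the stopped host:

// Sketch only: mirrors the "(may be ok)" handling of `minikube status`
// on a stopped profile. Binary path and profile name come from this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "default-k8s-diff-port-801499",
		"-n", "default-k8s-diff-port-801499")
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		// Tolerated: a stopped host makes status exit 7 by design.
		fmt.Printf("host stopped (status: %s)\n", strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		fmt.Println("unexpected status error:", err)
		return
	}
	fmt.Printf("host status: %s\n", strings.TrimSpace(string(out)))
}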
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-801499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 19:39:50.664561  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
E1024 19:41:21.918987  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
E1024 19:42:11.835926  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
E1024 19:43:18.869527  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-801499 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m43.889533249s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (344.34s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4s4b5" [06f5a9ad-ca25-4304-a3a2-f2ac31393c21] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4s4b5" [06f5a9ad-ca25-4304-a3a2-f2ac31393c21] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.025257094s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8p8k9" [8e49df82-2084-4942-a246-08f1204b6603] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8p8k9" [8e49df82-2084-4942-a246-08f1204b6603] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.076909016s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4s4b5" [06f5a9ad-ca25-4304-a3a2-f2ac31393c21] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009955794s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-099862 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8p8k9" [8e49df82-2084-4942-a246-08f1204b6603] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012651098s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-539193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-099862 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
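
The image audit above parses the JSON emitted by sudo crictl images -o json and flags repo tags outside minikube's default image set. A short Go decoding sketch; the field names follow the CRI JSON shape emitted by current crictl releases, so verify them against your version:

// Sketch only: decodes `crictl images -o json` output from stdin and
// prints every repo tag, the raw material behind the
// "Found non-minikube image" lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}

Piping the ssh output into it, e.g. out/minikube-linux-amd64 ssh -p embed-certs-099862 "sudo crictl images -o json" | go run main.go, reproduces the tag list the test filters.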
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-099862 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099862 -n embed-certs-099862
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099862 -n embed-certs-099862: exit status 2 (338.384574ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099862 -n embed-certs-099862
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099862 -n embed-certs-099862: exit status 2 (342.056374ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-099862 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-099862 -n embed-certs-099862
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-099862 -n embed-certs-099862
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.17s)
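
The Pause sequence above is: pause the profile, confirm {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (each with the tolerated exit status 2), then unpause. A condensed Go sketch of that verification, with the hypothetical checkField helper:

// Sketch only: exit status 2 is tolerated because a paused profile makes
// `minikube status` exit non-zero by design ("may be ok" in the log).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func checkField(profile, field, want string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still captured on non-zero exit
	if exitErr, ok := err.(*exec.ExitError); err != nil && (!ok || exitErr.ExitCode() != 2) {
		return err
	}
	if got := strings.TrimSpace(string(out)); got != want {
		return fmt.Errorf("%s = %q, want %q", field, got, want)
	}
	return nil
}

func main() {
	for _, c := range []struct{ field, want string }{
		{"APIServer", "Paused"},
		{"Kubelet", "Stopped"},
	} {
		if err := checkField("embed-certs-099862", c.field, c.want); err != nil {
			fmt.Println(err)
		}
	}
}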
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-539193 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-539193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-539193 -n no-preload-539193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-539193 -n no-preload-539193: exit status 2 (415.931349ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-539193 -n no-preload-539193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-539193 -n no-preload-539193: exit status 2 (372.800235ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-539193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-539193 --alsologtostderr -v=1: (1.04735011s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-539193 -n no-preload-539193
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-539193 -n no-preload-539193
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.66s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-838678 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-838678 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (39.021784259s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.02s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (47.628751218s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.63s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-838678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-838678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.244829992s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-838678 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-838678 --alsologtostderr -v=3: (1.367067668s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838678 -n newest-cni-838678
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838678 -n newest-cni-838678: exit status 7 (110.592444ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-838678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-838678 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-838678 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (28.607011972s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-838678 -n newest-cni-838678
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.00s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lrfhs" [3ee78791-b046-4e15-91f0-d0dcd4f4ab70] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lrfhs" [3ee78791-b046-4e15-91f0-d0dcd4f4ab70] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.015340504s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
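
The Localhost and HairPin checks above run the same nc probe against different targets: localhost proves the netcat container can reach its own port directly, while the service name netcat routes the connection back through the cluster service (hairpin traffic). A Go sketch of both probes, assuming the kubectl context from this log and a hypothetical ncProbe helper:

// Sketch only: replays the Localhost and HairPin nc probes via kubectl exec.
package main

import (
	"fmt"
	"os/exec"
)

// ncProbe runs `nc -w 5 -i 5 -z <target> 8080` inside the netcat deployment.
func ncProbe(kubeContext, target string) error {
	probe := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", probe).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := ncProbe("auto-973203", target); err != nil {
			fmt.Printf("probe %s:8080 failed: %v\n", target, err)
		} else {
			fmt.Printf("probe %s:8080 ok\n", target)
		}
	}
}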
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-838678 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-838678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838678 -n newest-cni-838678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838678 -n newest-cni-838678: exit status 2 (392.511847ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838678 -n newest-cni-838678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838678 -n newest-cni-838678: exit status 2 (380.64677ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-838678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-838678 -n newest-cni-838678
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-838678 -n newest-cni-838678
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.37s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.448284213s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.45s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m12.851050777s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.85s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8zjlp" [5c671f71-15fa-4a34-a346-e1e79bec71e0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8zjlp" [5c671f71-15fa-4a34-a346-e1e79bec71e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.022975135s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.02s)
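
The wait above polls for a Running pod matching k8s-app=kubernetes-dashboard for up to 9 minutes. A standalone equivalent built on kubectl wait (an assumption, not the harness's actual polling mechanism):

// Sketch only: blocks until a dashboard pod reports Ready or the
// timeout expires, roughly matching the harness's pod wait above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-801499",
		"wait", "--namespace", "kubernetes-dashboard",
		"--for=condition=Ready", "pod",
		"--selector", "k8s-app=kubernetes-dashboard",
		"--timeout=9m0s")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("wait failed: %v\n%s", err, out)
	} else {
		fmt.Print(string(out))
	}
}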
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8zjlp" [5c671f71-15fa-4a34-a346-e1e79bec71e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01397954s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-801499 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-96jnm" [c187a0f1-18c3-4231-9ab0-2e5b2388281b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018406636s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-801499 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-801499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499: exit status 2 (346.677529ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499: exit status 2 (351.307706ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-801499 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-801499 -n default-k8s-diff-port-801499
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.40s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-96jnm" [c187a0f1-18c3-4231-9ab0-2e5b2388281b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0137287s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-880692 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-880692 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-880692 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-880692 --alsologtostderr -v=1: (1.068936531s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880692 -n old-k8s-version-880692
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880692 -n old-k8s-version-880692: exit status 2 (462.258625ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880692 -n old-k8s-version-880692
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880692 -n old-k8s-version-880692: exit status 2 (469.196261ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-880692 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-880692 -n old-k8s-version-880692
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-880692 -n old-k8s-version-880692
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.89s)
E1024 19:47:11.835975  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/ingress-addon-legacy-462645/client.crt: no such file or directory
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m2.149359047s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.991661127s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.99s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-8n2bf" [505b295d-bbf8-42a9-af06-51b920b789e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020387148s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cdxrq" [cab3b515-bf02-4872-891f-085370673407] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028078102s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2c9st" [9a646a42-f998-4ec6-82c7-3bfbf003a041] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2c9st" [9a646a42-f998-4ec6-82c7-3bfbf003a041] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.012292214s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-twrvn" [7f62b176-8ebb-43fe-9be2-7297351277ff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-twrvn" [7f62b176-8ebb-43fe-9be2-7297351277ff] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013218556s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bbdnc" [31a72518-41a6-42ae-8ef6-56d4efecad0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bbdnc" [31a72518-41a6-42ae-8ef6-56d4efecad0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.012946529s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.37s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rvnbl" [e0f5acb0-99af-4075-8024-df3d4f4d6be8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rvnbl" [e0f5acb0-99af-4075-8024-df3d4f4d6be8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.01179901s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.009689144s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.01s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-973203 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.742494273s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.74s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z65rz" [2bf5fc97-65af-4711-a92d-71f7a9a7e604] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 19:47:53.713363  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/functional-558204/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-z65rz" [2bf5fc97-65af-4711-a92d-71f7a9a7e604] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011518742s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (32.5s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default
E1024 19:48:02.904256  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/no-preload-539193/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174117032s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1024 19:48:18.778498  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:18.868875  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/addons-291433/client.crt: no such file or directory
net_test.go:175: (dbg) Run:  kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.161198264s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.50s)
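Note: the two 15-second timeouts above before the final successful lookup suggest CoreDNS needed roughly 30 seconds to become reachable over the bridge CNI; the harness retries the probe until it succeeds or the poll deadline expires, which is why the test still passes at 32.50s. The probe can be reproduced by hand with the same command the harness runs (verbatim from the log; assumes the bridge-973203 context still exists):

	kubectl --context bridge-973203 exec deployment/netcat -- nslookup kubernetes.default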

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lvdts" [3fe7b614-05bb-45d4-8f30-8b5713d10297] Running
E1024 19:48:08.536176  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.541595  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.552049  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.572433  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.613227  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.693531  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:08.854327  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:09.175012  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:09.816177  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
E1024 19:48:11.097308  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018846313s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
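Note: the E1024 cert_rotation messages interleaved above are emitted by client-go's certificate-reload watcher, which still references client certificates of profiles (old-k8s-version-880692 and others) that were deleted earlier in the run; they are background noise, not a failure of the flannel checks. A quick sanity check that the watched key is simply gone (path verbatim from the log; expected to report "No such file or directory"):

	stat /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt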

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-973203 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-973203 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vt7x7" [c7fa62fb-41db-4947-a9cf-b5f309d00aed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 19:48:13.657546  478323 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/old-k8s-version-880692/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vt7x7" [c7fa62fb-41db-4947-a9cf-b5f309d00aed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.012406192s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-973203 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
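Note: the HairPin check has the netcat pod dial its own service name, exercising hairpin NAT (traffic that leaves a pod and loops back to it through the service VIP). The equivalent manual probe, verbatim from the log above:

	kubectl --context bridge-973203 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"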

                                                
                                    

Test skip (24/302)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-154030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-154030
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.68s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-973203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-973203" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:32:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-381520
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:33:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-830809
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-639553
contexts:
- context:
    cluster: cert-expiration-381520
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:32:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-381520
  name: cert-expiration-381520
- context:
    cluster: kubernetes-upgrade-830809
    user: kubernetes-upgrade-830809
  name: kubernetes-upgrade-830809
- context:
    cluster: pause-639553
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-639553
  name: pause-639553
current-context: pause-639553
kind: Config
preferences: {}
users:
- name: cert-expiration-381520
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/cert-expiration-381520/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/cert-expiration-381520/client.key
- name: kubernetes-upgrade-830809
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.key
- name: pause-639553
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-973203

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-973203"

                                                
                                                
----------------------- debugLogs end: kubenet-973203 [took: 4.455358873s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-973203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-973203
--- SKIP: TestNetworkPlugins/group/kubenet (4.68s)
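Note: kubenet is kubelet's legacy, non-CNI network plugin, and the crio runtime requires CNI-based networking, so this variant is skipped. The debugLogs errors above are expected: the kubenet-973203 profile is never created, so no kubeconfig context exists for it. If the profile were actually wanted, the harness's own suggestion (verbatim from the log) would be the starting point:

	minikube start -p kubenet-973203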

                                                
                                    
TestNetworkPlugins/group/cilium (5.98s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-973203 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-973203

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-973203" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:33:39 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-830809
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-471553/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-639553
contexts:
- context:
    cluster: kubernetes-upgrade-830809
    user: kubernetes-upgrade-830809
  name: kubernetes-upgrade-830809
- context:
    cluster: pause-639553
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 19:35:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-639553
  name: pause-639553
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-830809
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/kubernetes-upgrade-830809/client.key
- name: pause-639553
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.crt
    client-key: /home/jenkins/minikube-integration/17485-471553/.minikube/profiles/pause-639553/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-973203

>>> host: docker daemon status:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: docker daemon config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: docker system info:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: cri-docker daemon status:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: cri-docker daemon config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: cri-dockerd version:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: containerd daemon status:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: containerd daemon config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: containerd config dump:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: crio daemon status:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: crio daemon config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: /etc/crio:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

>>> host: crio config:
* Profile "cilium-973203" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-973203"

----------------------- debugLogs end: cilium-973203 [took: 5.777775934s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-973203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-973203
--- SKIP: TestNetworkPlugins/group/cilium (5.98s)