Test Report: Docker_Linux_crio_arm64 17323

c1ea47c43b7779cefdb242dbac2fab4b02ecdc60:2023-10-02:31265

Test fail (8/299)

TestAddons/parallel/Ingress (171.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-598993 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-598993 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-598993 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1f862fe9-b206-4807-bbfe-8ac4299c0833] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1f862fe9-b206-4807-bbfe-8ac4299c0833] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.057022547s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-598993 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.495819989s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context addons-598993 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.063455285s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p addons-598993 addons disable ingress-dns --alsologtostderr -v=1: (1.116808918s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p addons-598993 addons disable ingress --alsologtostderr -v=1: (7.756291164s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-598993
helpers_test.go:235: (dbg) docker inspect addons-598993:

-- stdout --
	[
	    {
	        "Id": "05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556",
	        "Created": "2023-10-02T21:23:30.386051542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1048699,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T21:23:30.708536583Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556/hostname",
	        "HostsPath": "/var/lib/docker/containers/05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556/hosts",
	        "LogPath": "/var/lib/docker/containers/05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556/05b94dc1767d75f37310d1a9a17bb8af40a037571c7db7d1bd8581c899c27556-json.log",
	        "Name": "/addons-598993",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-598993:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-598993",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9c695f5e4700b60f72c7220224b48fe2ab9926537614ca4f18bd4ff5280b3256-init/diff:/var/lib/docker/overlay2/211b77e87812a1edc3010e11f8a4e888a425a4aebe773b65e967cb7beecedbef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c695f5e4700b60f72c7220224b48fe2ab9926537614ca4f18bd4ff5280b3256/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c695f5e4700b60f72c7220224b48fe2ab9926537614ca4f18bd4ff5280b3256/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c695f5e4700b60f72c7220224b48fe2ab9926537614ca4f18bd4ff5280b3256/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-598993",
	                "Source": "/var/lib/docker/volumes/addons-598993/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-598993",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-598993",
	                "name.minikube.sigs.k8s.io": "addons-598993",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5de80e2653e99e1b85a9a34d1ddc6c6fcaeb26e6d03586b6fde8bb5b8252c31a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33735"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33734"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33731"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33733"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33732"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5de80e2653e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-598993": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "05b94dc1767d",
	                        "addons-598993"
	                    ],
	                    "NetworkID": "f59e7cdce4493ba8a5b92b126883cf72dee9ac0341b5fc5ff8b8c315825e86da",
	                    "EndpointID": "680823ae3ecaf0d92c45465ad809e873dbbe95c916d3a4cd185a91090810c672",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-598993 -n addons-598993
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-598993 logs -n 25: (1.640896852s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-585498   | jenkins | v1.31.2 | 02 Oct 23 21:22 UTC |                     |
	|         | -p download-only-585498                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-585498   | jenkins | v1.31.2 | 02 Oct 23 21:22 UTC |                     |
	|         | -p download-only-585498                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:23 UTC |
	| delete  | -p download-only-585498                                                                     | download-only-585498   | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:23 UTC |
	| delete  | -p download-only-585498                                                                     | download-only-585498   | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:23 UTC |
	| start   | --download-only -p                                                                          | download-docker-380768 | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC |                     |
	|         | download-docker-380768                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-380768                                                                   | download-docker-380768 | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:23 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-870867   | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC |                     |
	|         | binary-mirror-870867                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46249                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-870867                                                                     | binary-mirror-870867   | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:23 UTC |
	| start   | -p addons-598993 --wait=true                                                                | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:23 UTC | 02 Oct 23 21:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-598993 ip                                                                            | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC | 02 Oct 23 21:26 UTC |
	| addons  | addons-598993 addons disable                                                                | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC | 02 Oct 23 21:26 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-598993 addons                                                                        | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC | 02 Oct 23 21:26 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC | 02 Oct 23 21:26 UTC |
	|         | addons-598993                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-598993 ssh curl -s                                                                   | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-598993 addons                                                                        | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:26 UTC | 02 Oct 23 21:27 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-598993 addons                                                                        | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:27 UTC | 02 Oct 23 21:27 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:27 UTC | 02 Oct 23 21:27 UTC |
	|         | addons-598993                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-598993 ssh cat                                                                       | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:27 UTC | 02 Oct 23 21:27 UTC |
	|         | /opt/local-path-provisioner/pvc-8a19ccc8-8ac4-441c-9b07-6dca426035a8_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-598993 addons disable                                                                | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:27 UTC | 02 Oct 23 21:27 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:27 UTC | 02 Oct 23 21:28 UTC |
	|         | -p addons-598993                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-598993 ip                                                                            | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:28 UTC | 02 Oct 23 21:28 UTC |
	| addons  | addons-598993 addons disable                                                                | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:28 UTC | 02 Oct 23 21:29 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-598993 addons disable                                                                | addons-598993          | jenkins | v1.31.2 | 02 Oct 23 21:29 UTC | 02 Oct 23 21:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 21:23:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:23:07.111569 1048225 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:23:07.111829 1048225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:23:07.111861 1048225 out.go:309] Setting ErrFile to fd 2...
	I1002 21:23:07.111891 1048225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:23:07.112208 1048225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:23:07.112780 1048225 out.go:303] Setting JSON to false
	I1002 21:23:07.114127 1048225 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14734,"bootTime":1696267053,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:23:07.114266 1048225 start.go:138] virtualization:  
	I1002 21:23:07.117475 1048225 out.go:177] * [addons-598993] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:23:07.119666 1048225 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:23:07.121833 1048225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:23:07.119820 1048225 notify.go:220] Checking for updates...
	I1002 21:23:07.123759 1048225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:23:07.125879 1048225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:23:07.128012 1048225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:23:07.130114 1048225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:23:07.132632 1048225 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:23:07.156297 1048225 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:23:07.156393 1048225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:23:07.244646 1048225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-02 21:23:07.232715748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:23:07.244765 1048225 docker.go:294] overlay module found
	I1002 21:23:07.247273 1048225 out.go:177] * Using the docker driver based on user configuration
	I1002 21:23:07.249328 1048225 start.go:298] selected driver: docker
	I1002 21:23:07.249346 1048225 start.go:902] validating driver "docker" against <nil>
	I1002 21:23:07.249359 1048225 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:23:07.249988 1048225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:23:07.320412 1048225 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-02 21:23:07.310974772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:23:07.320557 1048225 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 21:23:07.320775 1048225 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:23:07.323112 1048225 out.go:177] * Using Docker driver with root privileges
	I1002 21:23:07.325175 1048225 cni.go:84] Creating CNI manager for ""
	I1002 21:23:07.325200 1048225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:23:07.325234 1048225 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:23:07.325251 1048225 start_flags.go:321] config:
	{Name:addons-598993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-598993 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:23:07.329334 1048225 out.go:177] * Starting control plane node addons-598993 in cluster addons-598993
	I1002 21:23:07.331506 1048225 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:23:07.333837 1048225 out.go:177] * Pulling base image ...
	I1002 21:23:07.335628 1048225 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:23:07.335684 1048225 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:23:07.335704 1048225 cache.go:57] Caching tarball of preloaded images
	I1002 21:23:07.335791 1048225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:23:07.335803 1048225 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:23:07.335814 1048225 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 21:23:07.336158 1048225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/config.json ...
	I1002 21:23:07.336247 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/config.json: {Name:mkd34bc23fc5ee6ea8c6c06c703ba03446af8b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:07.352524 1048225 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 21:23:07.352653 1048225 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 21:23:07.352676 1048225 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 21:23:07.352681 1048225 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 21:23:07.352689 1048225 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 21:23:07.352698 1048225 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from local cache
	I1002 21:23:23.155029 1048225 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from cached tarball
	I1002 21:23:23.155069 1048225 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:23:23.155146 1048225 start.go:365] acquiring machines lock for addons-598993: {Name:mk8fd6552244bbda9982f8cc081aff717d528024 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:23:23.155269 1048225 start.go:369] acquired machines lock for "addons-598993" in 96.886µs
	I1002 21:23:23.155298 1048225 start.go:93] Provisioning new machine with config: &{Name:addons-598993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-598993 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:23:23.155395 1048225 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:23:23.157920 1048225 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1002 21:23:23.158228 1048225 start.go:159] libmachine.API.Create for "addons-598993" (driver="docker")
	I1002 21:23:23.158262 1048225 client.go:168] LocalClient.Create starting
	I1002 21:23:23.158374 1048225 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem
	I1002 21:23:23.633400 1048225 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem
	I1002 21:23:23.793056 1048225 cli_runner.go:164] Run: docker network inspect addons-598993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:23:23.811881 1048225 cli_runner.go:211] docker network inspect addons-598993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:23:23.811963 1048225 network_create.go:281] running [docker network inspect addons-598993] to gather additional debugging logs...
	I1002 21:23:23.811984 1048225 cli_runner.go:164] Run: docker network inspect addons-598993
	W1002 21:23:23.830358 1048225 cli_runner.go:211] docker network inspect addons-598993 returned with exit code 1
	I1002 21:23:23.830401 1048225 network_create.go:284] error running [docker network inspect addons-598993]: docker network inspect addons-598993: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-598993 not found
	I1002 21:23:23.830421 1048225 network_create.go:286] output of [docker network inspect addons-598993]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-598993 not found
	
	** /stderr **
	I1002 21:23:23.830533 1048225 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:23:23.847820 1048225 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011f47a0}
	I1002 21:23:23.847862 1048225 network_create.go:124] attempt to create docker network addons-598993 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:23:23.847924 1048225 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-598993 addons-598993
	I1002 21:23:23.918083 1048225 network_create.go:108] docker network addons-598993 192.168.49.0/24 created
	I1002 21:23:23.918116 1048225 kic.go:117] calculated static IP "192.168.49.2" for the "addons-598993" container
	I1002 21:23:23.918188 1048225 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:23:23.934886 1048225 cli_runner.go:164] Run: docker volume create addons-598993 --label name.minikube.sigs.k8s.io=addons-598993 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:23:23.953374 1048225 oci.go:103] Successfully created a docker volume addons-598993
	I1002 21:23:23.953468 1048225 cli_runner.go:164] Run: docker run --rm --name addons-598993-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598993 --entrypoint /usr/bin/test -v addons-598993:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 21:23:26.069899 1048225 cli_runner.go:217] Completed: docker run --rm --name addons-598993-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598993 --entrypoint /usr/bin/test -v addons-598993:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (2.116388581s)
	I1002 21:23:26.069930 1048225 oci.go:107] Successfully prepared a docker volume addons-598993
	I1002 21:23:26.069951 1048225 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:23:26.069971 1048225 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 21:23:26.070052 1048225 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-598993:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:23:30.297547 1048225 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-598993:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.227452752s)
	I1002 21:23:30.297579 1048225 kic.go:199] duration metric: took 4.227605 seconds to extract preloaded images to volume
	W1002 21:23:30.297717 1048225 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:23:30.297833 1048225 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:23:30.369903 1048225 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-598993 --name addons-598993 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-598993 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-598993 --network addons-598993 --ip 192.168.49.2 --volume addons-598993:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 21:23:30.717573 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Running}}
	I1002 21:23:30.749180 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:23:30.775381 1048225 cli_runner.go:164] Run: docker exec addons-598993 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:23:30.866065 1048225 oci.go:144] the created container "addons-598993" has a running status.
	I1002 21:23:30.866091 1048225 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa...
	I1002 21:23:32.130107 1048225 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:23:32.151819 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:23:32.169110 1048225 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:23:32.169131 1048225 kic_runner.go:114] Args: [docker exec --privileged addons-598993 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:23:32.241735 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:23:32.263934 1048225 machine.go:88] provisioning docker machine ...
	I1002 21:23:32.263982 1048225 ubuntu.go:169] provisioning hostname "addons-598993"
	I1002 21:23:32.264095 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:32.282524 1048225 main.go:141] libmachine: Using SSH client type: native
	I1002 21:23:32.282989 1048225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33735 <nil> <nil>}
	I1002 21:23:32.283012 1048225 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-598993 && echo "addons-598993" | sudo tee /etc/hostname
	I1002 21:23:32.437184 1048225 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-598993
	
	I1002 21:23:32.437287 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:32.456130 1048225 main.go:141] libmachine: Using SSH client type: native
	I1002 21:23:32.456542 1048225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33735 <nil> <nil>}
	I1002 21:23:32.456564 1048225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-598993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-598993/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-598993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:23:32.598544 1048225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:23:32.598619 1048225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 21:23:32.598653 1048225 ubuntu.go:177] setting up certificates
	I1002 21:23:32.598692 1048225 provision.go:83] configureAuth start
	I1002 21:23:32.598788 1048225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598993
	I1002 21:23:32.616963 1048225 provision.go:138] copyHostCerts
	I1002 21:23:32.617044 1048225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 21:23:32.617176 1048225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 21:23:32.617336 1048225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 21:23:32.617394 1048225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.addons-598993 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-598993]
	I1002 21:23:33.169743 1048225 provision.go:172] copyRemoteCerts
	I1002 21:23:33.169812 1048225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:23:33.169858 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.188575 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:23:33.288397 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:23:33.316666 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 21:23:33.345043 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:23:33.373790 1048225 provision.go:86] duration metric: configureAuth took 775.06782ms
	I1002 21:23:33.373816 1048225 ubuntu.go:193] setting minikube options for container-runtime
	I1002 21:23:33.374002 1048225 config.go:182] Loaded profile config "addons-598993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:23:33.374114 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.396834 1048225 main.go:141] libmachine: Using SSH client type: native
	I1002 21:23:33.397267 1048225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33735 <nil> <nil>}
	I1002 21:23:33.397290 1048225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:23:33.652037 1048225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:23:33.652062 1048225 machine.go:91] provisioned docker machine in 1.388090103s
	I1002 21:23:33.652072 1048225 client.go:171] LocalClient.Create took 10.493804291s
	I1002 21:23:33.652090 1048225 start.go:167] duration metric: libmachine.API.Create for "addons-598993" took 10.493862694s
	I1002 21:23:33.652096 1048225 start.go:300] post-start starting for "addons-598993" (driver="docker")
	I1002 21:23:33.652106 1048225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:23:33.652178 1048225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:23:33.652225 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.675910 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:23:33.776251 1048225 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:23:33.780417 1048225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:23:33.780451 1048225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 21:23:33.780464 1048225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 21:23:33.780473 1048225 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 21:23:33.780483 1048225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 21:23:33.780552 1048225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 21:23:33.780574 1048225 start.go:303] post-start completed in 128.472168ms
	I1002 21:23:33.780894 1048225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598993
	I1002 21:23:33.798553 1048225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/config.json ...
	I1002 21:23:33.798845 1048225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:23:33.798907 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.816824 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:23:33.911431 1048225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:23:33.917399 1048225 start.go:128] duration metric: createHost completed in 10.761987079s
	I1002 21:23:33.917424 1048225 start.go:83] releasing machines lock for "addons-598993", held for 10.762141894s
	I1002 21:23:33.917496 1048225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-598993
	I1002 21:23:33.935957 1048225 ssh_runner.go:195] Run: cat /version.json
	I1002 21:23:33.936015 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.936033 1048225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:23:33.936093 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:23:33.955014 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:23:33.956829 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:23:34.050070 1048225 ssh_runner.go:195] Run: systemctl --version
	I1002 21:23:34.185036 1048225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:23:34.337980 1048225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 21:23:34.343707 1048225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:23:34.368249 1048225 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 21:23:34.368324 1048225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:23:34.408968 1048225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 21:23:34.408994 1048225 start.go:469] detecting cgroup driver to use...
	I1002 21:23:34.409027 1048225 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 21:23:34.409086 1048225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:23:34.429874 1048225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:23:34.443510 1048225 docker.go:197] disabling cri-docker service (if available) ...
	I1002 21:23:34.443598 1048225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:23:34.461342 1048225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:23:34.480076 1048225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:23:34.588879 1048225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:23:34.698964 1048225 docker.go:213] disabling docker service ...
	I1002 21:23:34.699068 1048225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:23:34.721630 1048225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:23:34.736481 1048225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:23:34.845584 1048225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:23:34.952537 1048225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:23:34.966979 1048225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:23:34.987924 1048225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:23:34.987991 1048225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:23:35.000373 1048225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:23:35.000540 1048225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:23:35.016314 1048225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:23:35.029663 1048225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:23:35.042700 1048225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:23:35.054385 1048225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:23:35.065230 1048225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:23:35.076491 1048225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:23:35.180990 1048225 ssh_runner.go:195] Run: sudo systemctl restart crio
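
	Taken together, the CRI-O tweaks logged above amount to the following drop-in edits followed by a restart. This is a readability sketch assembled from the commands shown in this log, not minikube's own code path:

		# point CRI-O at the expected pause image and cgroup settings (02-crio.conf drop-in)
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# reload units and restart the runtime so the new settings take effect
		sudo systemctl daemon-reload && sudo systemctl restart crio
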
	I1002 21:23:35.302158 1048225 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:23:35.302247 1048225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:23:35.307141 1048225 start.go:537] Will wait 60s for crictl version
	I1002 21:23:35.307209 1048225 ssh_runner.go:195] Run: which crictl
	I1002 21:23:35.311886 1048225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:23:35.360929 1048225 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 21:23:35.361038 1048225 ssh_runner.go:195] Run: crio --version
	I1002 21:23:35.405826 1048225 ssh_runner.go:195] Run: crio --version
	I1002 21:23:35.455092 1048225 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 21:23:35.457015 1048225 cli_runner.go:164] Run: docker network inspect addons-598993 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:23:35.474432 1048225 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:23:35.479121 1048225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:23:35.492521 1048225 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:23:35.492624 1048225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:23:35.557438 1048225 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:23:35.557463 1048225 crio.go:415] Images already preloaded, skipping extraction
	I1002 21:23:35.557518 1048225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:23:35.597986 1048225 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:23:35.598010 1048225 cache_images.go:84] Images are preloaded, skipping loading
	I1002 21:23:35.598085 1048225 ssh_runner.go:195] Run: crio config
	I1002 21:23:35.650484 1048225 cni.go:84] Creating CNI manager for ""
	I1002 21:23:35.650506 1048225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:23:35.650557 1048225 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 21:23:35.650583 1048225 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-598993 NodeName:addons-598993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:23:35.650729 1048225 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-598993"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:23:35.650801 1048225 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-598993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-598993 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 21:23:35.650877 1048225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 21:23:35.661975 1048225 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:23:35.662051 1048225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:23:35.672868 1048225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1002 21:23:35.695011 1048225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:23:35.717179 1048225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
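
	With the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new, a file of this shape can be sanity-checked without mutating the node using kubeadm's dry-run mode. This is an illustrative command, not something minikube runs in this test:

		# validate the rendered kubeadm config without applying anything (illustrative check)
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
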
	I1002 21:23:35.738910 1048225 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:23:35.743355 1048225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:23:35.756692 1048225 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993 for IP: 192.168.49.2
	I1002 21:23:35.756722 1048225 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:35.757323 1048225 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 21:23:36.233233 1048225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt ...
	I1002 21:23:36.233260 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt: {Name:mk7b49c62666ee7ad9b2a24186a1ce127783c232 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:36.233445 1048225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key ...
	I1002 21:23:36.233457 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key: {Name:mk88a121d4c529a5bc63480595245092f9dcf210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:36.233547 1048225 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 21:23:37.973386 1048225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt ...
	I1002 21:23:37.973428 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt: {Name:mk327329983f9f1916fa6276360494c68becd790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:37.973705 1048225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key ...
	I1002 21:23:37.973721 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key: {Name:mk1fd0838e12abb069a2e6e3602899e8ee38b8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:37.974353 1048225 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.key
	I1002 21:23:37.974373 1048225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt with IP's: []
	I1002 21:23:38.410438 1048225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt ...
	I1002 21:23:38.410474 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: {Name:mk18ad9ee8c9d42513cd8fff0c47d3be5116935a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:38.410706 1048225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.key ...
	I1002 21:23:38.410721 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.key: {Name:mk71c91674e26cb475f2f1572fcbc69e586321cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:38.411370 1048225 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key.dd3b5fb2
	I1002 21:23:38.411394 1048225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 21:23:40.339001 1048225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt.dd3b5fb2 ...
	I1002 21:23:40.339041 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt.dd3b5fb2: {Name:mk227ba407de4809b639dde01ccf6a7d29eb8f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:40.339259 1048225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key.dd3b5fb2 ...
	I1002 21:23:40.339272 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key.dd3b5fb2: {Name:mk05455c50eac15746869d52a58e778609c4dab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:40.339379 1048225 certs.go:337] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt
	I1002 21:23:40.339461 1048225 certs.go:341] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key
	I1002 21:23:40.339531 1048225 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.key
	I1002 21:23:40.339551 1048225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.crt with IP's: []
	I1002 21:23:40.493996 1048225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.crt ...
	I1002 21:23:40.494025 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.crt: {Name:mkcb91ecaf24f0763a9637820ee3e0342e4f1a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:40.494216 1048225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.key ...
	I1002 21:23:40.494229 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.key: {Name:mk3ae3027ab1e84ada7835ef92ccfcf6916fc915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:23:40.494832 1048225 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:23:40.494920 1048225 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:23:40.494952 1048225 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:23:40.494982 1048225 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 21:23:40.495569 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 21:23:40.524045 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:23:40.553015 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:23:40.581990 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:23:40.610633 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:23:40.639722 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:23:40.668280 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:23:40.697960 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:23:40.726967 1048225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:23:40.757000 1048225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:23:40.778979 1048225 ssh_runner.go:195] Run: openssl version
	I1002 21:23:40.786309 1048225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:23:40.798176 1048225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:23:40.802820 1048225 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:23:40.802966 1048225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:23:40.811591 1048225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
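
	The 8-hex-digit link name b5213941.0 follows the OpenSSL subject-hash convention: it is the value printed by the openssl x509 -hash call two lines above, with the .0 suffix distinguishing colliding hashes. The name can be reproduced by hand with the same command the log runs:

		# prints the subject hash that becomes the /etc/ssl/certs/<hash>.0 symlink name
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
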
	I1002 21:23:40.823380 1048225 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 21:23:40.827796 1048225 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:23:40.827843 1048225 kubeadm.go:404] StartCluster: {Name:addons-598993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-598993 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:23:40.827920 1048225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:23:40.827974 1048225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:23:40.871198 1048225 cri.go:89] found id: ""
	I1002 21:23:40.871268 1048225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:23:40.882072 1048225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:23:40.893036 1048225 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:23:40.893122 1048225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:23:40.904013 1048225 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:23:40.904057 1048225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:23:40.959382 1048225 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 21:23:40.959797 1048225 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 21:23:41.007424 1048225 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:23:41.007514 1048225 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 21:23:41.007570 1048225 kubeadm.go:322] OS: Linux
	I1002 21:23:41.007637 1048225 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 21:23:41.007701 1048225 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 21:23:41.007766 1048225 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 21:23:41.007830 1048225 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 21:23:41.007897 1048225 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 21:23:41.007963 1048225 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 21:23:41.008026 1048225 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1002 21:23:41.008090 1048225 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1002 21:23:41.008149 1048225 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1002 21:23:41.092714 1048225 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:23:41.092840 1048225 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:23:41.092972 1048225 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 21:23:41.350288 1048225 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:23:41.353459 1048225 out.go:204]   - Generating certificates and keys ...
	I1002 21:23:41.353636 1048225 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 21:23:41.353756 1048225 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 21:23:42.039241 1048225 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:23:42.693567 1048225 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:23:43.146669 1048225 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:23:43.387675 1048225 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 21:23:44.547171 1048225 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 21:23:44.547388 1048225 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-598993 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:23:44.733137 1048225 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 21:23:44.733319 1048225 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-598993 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:23:46.139381 1048225 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:23:46.358379 1048225 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:23:46.589986 1048225 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 21:23:46.590445 1048225 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:23:46.893379 1048225 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:23:47.394079 1048225 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:23:47.996219 1048225 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:23:48.328715 1048225 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:23:48.329694 1048225 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:23:48.332744 1048225 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:23:48.335927 1048225 out.go:204]   - Booting up control plane ...
	I1002 21:23:48.336072 1048225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:23:48.336151 1048225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:23:48.336504 1048225 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:23:48.347735 1048225 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:23:48.348807 1048225 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:23:48.348973 1048225 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 21:23:48.452559 1048225 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 21:23:55.955604 1048225 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503402 seconds
	I1002 21:23:55.955731 1048225 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:23:55.970510 1048225 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:23:56.497239 1048225 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:23:56.497425 1048225 kubeadm.go:322] [mark-control-plane] Marking the node addons-598993 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:23:57.011952 1048225 kubeadm.go:322] [bootstrap-token] Using token: qcqwjx.74mn3mj7bst9xh8u
	I1002 21:23:57.013973 1048225 out.go:204]   - Configuring RBAC rules ...
	I1002 21:23:57.014103 1048225 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:23:57.021462 1048225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:23:57.030531 1048225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:23:57.036343 1048225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:23:57.040982 1048225 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:23:57.046060 1048225 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:23:57.064808 1048225 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:23:57.297382 1048225 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 21:23:57.429580 1048225 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 21:23:57.429597 1048225 kubeadm.go:322] 
	I1002 21:23:57.429654 1048225 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 21:23:57.429658 1048225 kubeadm.go:322] 
	I1002 21:23:57.429730 1048225 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 21:23:57.429734 1048225 kubeadm.go:322] 
	I1002 21:23:57.429758 1048225 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 21:23:57.429814 1048225 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:23:57.429861 1048225 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:23:57.429866 1048225 kubeadm.go:322] 
	I1002 21:23:57.429916 1048225 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 21:23:57.429921 1048225 kubeadm.go:322] 
	I1002 21:23:57.429966 1048225 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:23:57.429970 1048225 kubeadm.go:322] 
	I1002 21:23:57.430019 1048225 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 21:23:57.430089 1048225 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:23:57.430153 1048225 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:23:57.430158 1048225 kubeadm.go:322] 
	I1002 21:23:57.430236 1048225 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:23:57.430308 1048225 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 21:23:57.430313 1048225 kubeadm.go:322] 
	I1002 21:23:57.430391 1048225 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qcqwjx.74mn3mj7bst9xh8u \
	I1002 21:23:57.430487 1048225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 \
	I1002 21:23:57.430506 1048225 kubeadm.go:322] 	--control-plane 
	I1002 21:23:57.430511 1048225 kubeadm.go:322] 
	I1002 21:23:57.430590 1048225 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:23:57.430595 1048225 kubeadm.go:322] 
	I1002 21:23:57.430676 1048225 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qcqwjx.74mn3mj7bst9xh8u \
	I1002 21:23:57.430771 1048225 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 
	I1002 21:23:57.432715 1048225 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 21:23:57.432825 1048225 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
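
	If the join hash ever needs to be re-derived on the control plane, the sha256 value in --discovery-token-ca-cert-hash can be recomputed from the cluster CA. The path below follows the certificatesDir from the kubeadm config above; this is the standard kubeadm recipe shown for illustration, not a step this test performs:

		# recompute the CA public-key hash used by --discovery-token-ca-cert-hash (illustrative)
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
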
	I1002 21:23:57.432838 1048225 cni.go:84] Creating CNI manager for ""
	I1002 21:23:57.432846 1048225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:23:57.435280 1048225 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 21:23:57.437578 1048225 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:23:57.448749 1048225 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 21:23:57.448773 1048225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 21:23:57.503760 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:23:58.428013 1048225 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:23:58.428153 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:23:58.428238 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86 minikube.k8s.io/name=addons-598993 minikube.k8s.io/updated_at=2023_10_02T21_23_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:23:58.576497 1048225 ops.go:34] apiserver oom_adj: -16
	I1002 21:23:58.576589 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:23:58.699005 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:23:59.297461 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:23:59.797542 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:00.297740 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:00.797629 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:01.296881 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:01.797239 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:02.297303 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:02.797185 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:03.297225 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:03.797678 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:04.297373 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:04.797225 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:05.297152 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:05.797281 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:06.296900 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:06.797404 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:07.296857 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:07.796924 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:08.297061 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:08.796927 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:09.297156 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:09.797173 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:10.297458 1048225 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:24:10.434136 1048225 kubeadm.go:1081] duration metric: took 12.006026349s to wait for elevateKubeSystemPrivileges.
	I1002 21:24:10.434161 1048225 kubeadm.go:406] StartCluster complete in 29.606322152s
	I1002 21:24:10.434177 1048225 settings.go:142] acquiring lock: {Name:mk84ed9b341869374b10cf082af1bfa542d39dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:24:10.434291 1048225 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:24:10.434718 1048225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:24:10.436901 1048225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:24:10.437159 1048225 config.go:182] Loaded profile config "addons-598993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:24:10.437187 1048225 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1002 21:24:10.437823 1048225 addons.go:69] Setting gcp-auth=true in profile "addons-598993"
	I1002 21:24:10.437842 1048225 mustload.go:65] Loading cluster: addons-598993
	I1002 21:24:10.438000 1048225 config.go:182] Loaded profile config "addons-598993": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:24:10.438282 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.438473 1048225 addons.go:69] Setting volumesnapshots=true in profile "addons-598993"
	I1002 21:24:10.438525 1048225 addons.go:231] Setting addon volumesnapshots=true in "addons-598993"
	I1002 21:24:10.438600 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.439100 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.439497 1048225 addons.go:69] Setting ingress=true in profile "addons-598993"
	I1002 21:24:10.439513 1048225 addons.go:231] Setting addon ingress=true in "addons-598993"
	I1002 21:24:10.439565 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.439942 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.440099 1048225 addons.go:69] Setting cloud-spanner=true in profile "addons-598993"
	I1002 21:24:10.440115 1048225 addons.go:231] Setting addon cloud-spanner=true in "addons-598993"
	I1002 21:24:10.440169 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.440562 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.442972 1048225 addons.go:69] Setting ingress-dns=true in profile "addons-598993"
	I1002 21:24:10.442998 1048225 addons.go:231] Setting addon ingress-dns=true in "addons-598993"
	I1002 21:24:10.443058 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.443491 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.447868 1048225 addons.go:69] Setting inspektor-gadget=true in profile "addons-598993"
	I1002 21:24:10.447903 1048225 addons.go:231] Setting addon inspektor-gadget=true in "addons-598993"
	I1002 21:24:10.447958 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.448405 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.448712 1048225 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-598993"
	I1002 21:24:10.448762 1048225 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-598993"
	I1002 21:24:10.448806 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.449195 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.461307 1048225 addons.go:69] Setting default-storageclass=true in profile "addons-598993"
	I1002 21:24:10.461341 1048225 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-598993"
	I1002 21:24:10.461686 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.473446 1048225 addons.go:69] Setting metrics-server=true in profile "addons-598993"
	I1002 21:24:10.473476 1048225 addons.go:231] Setting addon metrics-server=true in "addons-598993"
	I1002 21:24:10.473528 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.473978 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.489394 1048225 addons.go:69] Setting registry=true in profile "addons-598993"
	I1002 21:24:10.489441 1048225 addons.go:231] Setting addon registry=true in "addons-598993"
	I1002 21:24:10.489504 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.489975 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.518347 1048225 addons.go:69] Setting storage-provisioner=true in profile "addons-598993"
	I1002 21:24:10.518378 1048225 addons.go:231] Setting addon storage-provisioner=true in "addons-598993"
	I1002 21:24:10.518430 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.518924 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.542373 1048225 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-598993"
	I1002 21:24:10.542402 1048225 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-598993"
	I1002 21:24:10.542765 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.654350 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.687222 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 21:24:10.689460 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 21:24:10.689491 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 21:24:10.689573 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.713060 1048225 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1002 21:24:10.715086 1048225 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1002 21:24:10.717475 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1002 21:24:10.717497 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1002 21:24:10.717565 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.722387 1048225 addons.go:231] Setting addon default-storageclass=true in "addons-598993"
	I1002 21:24:10.722482 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.723060 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.715375 1048225 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:24:10.733226 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1002 21:24:10.733301 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.715383 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 21:24:10.740785 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 21:24:10.743443 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 21:24:10.745613 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 21:24:10.715388 1048225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.0
	I1002 21:24:10.715392 1048225 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1002 21:24:10.749990 1048225 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1002 21:24:10.748322 1048225 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-598993"
	I1002 21:24:10.751818 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:10.752323 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:10.752482 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 21:24:10.754401 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 21:24:10.752687 1048225 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 21:24:10.752796 1048225 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1002 21:24:10.757042 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 21:24:10.759975 1048225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 21:24:10.759990 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 21:24:10.765353 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.765939 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.785527 1048225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 21:24:10.789078 1048225 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:24:10.789103 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1002 21:24:10.789177 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.783056 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 21:24:10.811783 1048225 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 21:24:10.810769 1048225 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-598993" context rescaled to 1 replicas
	I1002 21:24:10.811315 1048225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:24:10.816619 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 21:24:10.816652 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 21:24:10.816745 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.832484 1048225 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:24:10.834430 1048225 out.go:177] * Verifying Kubernetes components...
	I1002 21:24:10.837564 1048225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:10.839969 1048225 out.go:177]   - Using image docker.io/registry:2.8.1
	I1002 21:24:10.841912 1048225 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1002 21:24:10.848054 1048225 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 21:24:10.848088 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1002 21:24:10.848158 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.848669 1048225 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:24:10.858840 1048225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:24:10.858917 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:24:10.859022 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.894679 1048225 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:24:10.894699 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:24:10.894760 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:10.897596 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:10.923325 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:10.934157 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:10.981230 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:10.983842 1048225 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 21:24:10.986366 1048225 out.go:177]   - Using image docker.io/busybox:stable
	I1002 21:24:10.988396 1048225 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:24:10.988420 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 21:24:10.988486 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:11.014487 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.032838 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.050640 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.056648 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.061310 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.081323 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:11.113542 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	W1002 21:24:11.116646 1048225 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 21:24:11.116678 1048225 retry.go:31] will retry after 180.961219ms: ssh: handshake failed: EOF
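The repeated docker container inspect -f "...HostPort..." calls above look up the host port Docker published for the node container's SSH port (22/tcp), and each sshutil client then dials 127.0.0.1 on that port (33735 in this run); a transient "ssh: handshake failed: EOF" is simply retried. A rough manual equivalent, sketched here and assuming the key path shown in this run, would be:

  # Look up the published host port for 22/tcp on the node container,
  # then SSH in as the "docker" user the way the sshutil clients above do.
  PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-598993)
  ssh -i /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa \
      -p "$PORT" docker@127.0.0.1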
	I1002 21:24:11.304731 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1002 21:24:11.304755 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1002 21:24:11.388102 1048225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 21:24:11.388163 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 21:24:11.413047 1048225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 21:24:11.413072 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 21:24:11.456460 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:24:11.458581 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 21:24:11.468495 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 21:24:11.477333 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 21:24:11.477359 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 21:24:11.480769 1048225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 21:24:11.480794 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 21:24:11.550481 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1002 21:24:11.550508 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1002 21:24:11.554064 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 21:24:11.571406 1048225 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 21:24:11.571433 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 21:24:11.587862 1048225 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:24:11.587892 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 21:24:11.592596 1048225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 21:24:11.592625 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 21:24:11.595129 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 21:24:11.611168 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 21:24:11.611193 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 21:24:11.626729 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:24:11.720147 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1002 21:24:11.720175 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1002 21:24:11.742656 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 21:24:11.743271 1048225 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:24:11.743292 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 21:24:11.763578 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 21:24:11.763604 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 21:24:11.775616 1048225 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 21:24:11.775650 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 21:24:11.877590 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1002 21:24:11.877616 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1002 21:24:11.928430 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 21:24:11.928464 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 21:24:12.014775 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 21:24:12.030407 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 21:24:12.030438 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 21:24:12.138203 1048225 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 21:24:12.138228 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 21:24:12.149177 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1002 21:24:12.149225 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1002 21:24:12.281682 1048225 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:24:12.281706 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 21:24:12.339468 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 21:24:12.339494 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1002 21:24:12.365731 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 21:24:12.365763 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1002 21:24:12.405002 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:24:12.453833 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 21:24:12.453862 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 21:24:12.507666 1048225 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 21:24:12.507693 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1002 21:24:12.551548 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 21:24:12.551574 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 21:24:12.606701 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1002 21:24:12.666900 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 21:24:12.666935 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 21:24:12.776841 1048225 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 21:24:12.776875 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 21:24:12.796182 1048225 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.980117552s)
	I1002 21:24:12.796226 1048225 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
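The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block resolving host.minikube.internal to 192.168.49.1 ahead of the forward directive, and adds a log directive before errors. A quick confirmation that the patch landed, sketched here rather than taken from this run, assuming the standard Corefile data key:

  # Print the patched Corefile and show the injected hosts block.
  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'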
	I1002 21:24:12.796258 1048225 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.958674421s)
	I1002 21:24:12.797456 1048225 node_ready.go:35] waiting up to 6m0s for node "addons-598993" to be "Ready" ...
	I1002 21:24:12.885163 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 21:24:14.492128 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.035626372s)
	I1002 21:24:15.267346 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:15.408125 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.949506007s)
	I1002 21:24:16.614900 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.146358376s)
	I1002 21:24:16.614933 1048225 addons.go:467] Verifying addon ingress=true in "addons-598993"
	I1002 21:24:16.618125 1048225 out.go:177] * Verifying ingress addon...
	I1002 21:24:16.615208 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.061055619s)
	I1002 21:24:16.615239 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.020084796s)
	I1002 21:24:16.615279 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.988524758s)
	I1002 21:24:16.615348 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.872662581s)
	I1002 21:24:16.615409 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.600593643s)
	I1002 21:24:16.615497 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.210465631s)
	I1002 21:24:16.615545 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.008813529s)
	I1002 21:24:16.618575 1048225 addons.go:467] Verifying addon registry=true in "addons-598993"
	I1002 21:24:16.618586 1048225 addons.go:467] Verifying addon metrics-server=true in "addons-598993"
	W1002 21:24:16.618607 1048225 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 21:24:16.622399 1048225 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 21:24:16.624932 1048225 out.go:177] * Verifying registry addon...
	I1002 21:24:16.628330 1048225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 21:24:16.625078 1048225 retry.go:31] will retry after 200.121473ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
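The failure above is an ordering problem rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so API discovery does not yet know the kind ("ensure CRDs are installed first"). The addon manager handles this by retrying, and the retried apply below completes without a further retry being logged. Outside this test, one hedged way to avoid the race entirely is to apply the CRDs first and wait for them to be established before applying objects that use them:

  # Sketch only: apply the snapshot CRDs, wait until they are served,
  # then apply the VolumeSnapshotClass that depends on them.
  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
  kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml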
	I1002 21:24:16.637121 1048225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:24:16.637190 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:16.653150 1048225 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 21:24:16.653262 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:16.664785 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:16.673660 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:16.829032 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 21:24:17.063642 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.178344794s)
	I1002 21:24:17.063728 1048225 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-598993"
	I1002 21:24:17.066160 1048225 out.go:177] * Verifying csi-hostpath-driver addon...
	I1002 21:24:17.082094 1048225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 21:24:17.134013 1048225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:24:17.134037 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
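Each kapi.go:96 line in this stretch is one poll of the pods matching the named label selector, repeated until the matched pods come up; "Pending: [<nil>]" just means the pods found are still Pending with no error attached. A hypothetical manual check equivalent to one of these polls:

  # List the CSI hostpath driver pods the addon manager is polling on.
  kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver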
	I1002 21:24:17.149029 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:17.175942 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:17.184530 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:17.619097 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:17.654630 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:17.671849 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:17.679078 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:17.707477 1048225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 21:24:17.707558 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:17.753741 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:18.017254 1048225 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 21:24:18.068023 1048225 addons.go:231] Setting addon gcp-auth=true in "addons-598993"
	I1002 21:24:18.068078 1048225 host.go:66] Checking if "addons-598993" exists ...
	I1002 21:24:18.068552 1048225 cli_runner.go:164] Run: docker container inspect addons-598993 --format={{.State.Status}}
	I1002 21:24:18.121418 1048225 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 21:24:18.121472 1048225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-598993
	I1002 21:24:18.166491 1048225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33735 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/addons-598993/id_rsa Username:docker}
	I1002 21:24:18.190434 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:18.221755 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:18.222043 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:18.527270 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.698127977s)
	I1002 21:24:18.531637 1048225 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1002 21:24:18.533921 1048225 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1002 21:24:18.536122 1048225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 21:24:18.536177 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 21:24:18.600984 1048225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 21:24:18.601056 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 21:24:18.654591 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:18.663665 1048225 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 21:24:18.663723 1048225 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1002 21:24:18.670582 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:18.678713 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:18.724976 1048225 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 21:24:19.154333 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:19.169479 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:19.178924 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:19.620768 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:19.655004 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:19.669880 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:19.678728 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:19.972810 1048225 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.247747033s)
	I1002 21:24:19.975594 1048225 addons.go:467] Verifying addon gcp-auth=true in "addons-598993"
	I1002 21:24:19.980027 1048225 out.go:177] * Verifying gcp-auth addon...
	I1002 21:24:19.983251 1048225 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 21:24:20.070320 1048225 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 21:24:20.070392 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:20.106368 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:20.174654 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:20.175399 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:20.184915 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:20.610394 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:20.655803 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:20.674341 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:20.685632 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:21.111828 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:21.154708 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:21.170740 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:21.184380 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:21.610902 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:21.654755 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:21.671919 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:21.681525 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:22.111597 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:22.123095 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:22.155697 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:22.173941 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:22.180490 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:22.610916 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:22.655509 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:22.670502 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:22.682769 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:23.112607 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:23.156978 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:23.172050 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:23.179118 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:23.611551 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:23.654049 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:23.669581 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:23.678256 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:24.109925 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:24.154738 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:24.169250 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:24.178205 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:24.610043 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:24.619468 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:24.653587 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:24.669047 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:24.677628 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:25.110878 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:25.153913 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:25.169792 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:25.178735 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:25.610598 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:25.653309 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:25.669760 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:25.678360 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:26.109988 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:26.154134 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:26.169744 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:26.178559 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:26.610437 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:26.620837 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:26.655426 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:26.669878 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:26.679415 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:27.110375 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:27.153948 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:27.169698 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:27.177804 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:27.610568 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:27.654101 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:27.669074 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:27.677861 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:28.111135 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:28.153474 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:28.169121 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:28.177785 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:28.610784 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:28.653061 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:28.669141 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:28.678116 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:29.110279 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:29.119025 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:29.153596 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:29.169530 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:29.178611 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:29.610699 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:29.655129 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:29.670272 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:29.678967 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:30.111408 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:30.155539 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:30.170125 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:30.178222 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:30.610790 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:30.655112 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:30.669057 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:30.678262 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:31.110272 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:31.119096 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:31.153443 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:31.169527 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:31.178574 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:31.611845 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:31.654407 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:31.669397 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:31.678961 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:32.110097 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:32.153946 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:32.169547 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:32.178832 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:32.611036 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:32.654038 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:32.669660 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:32.678434 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:33.110681 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:33.120052 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:33.153995 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:33.169578 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:33.178577 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:33.610663 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:33.653779 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:33.669459 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:33.679789 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:34.110511 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:34.153317 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:34.169821 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:34.178519 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:34.610896 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:34.654015 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:34.669803 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:34.678335 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:35.110020 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:35.120168 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:35.155168 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:35.169936 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:35.178850 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:35.611227 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:35.653695 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:35.670619 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:35.678236 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:36.110581 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:36.154085 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:36.169274 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:36.177930 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:36.612024 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:36.654556 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:36.669810 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:36.680902 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:37.110370 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:37.154202 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:37.169588 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:37.178656 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:37.610783 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:37.619820 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:37.654376 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:37.669036 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:37.678882 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:38.110753 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:38.153618 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:38.169868 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:38.178934 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:38.610020 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:38.653662 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:38.670009 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:38.677629 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:39.110569 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:39.153504 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:39.169296 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:39.178451 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:39.610752 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:39.654110 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:39.669839 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:39.678387 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:40.110483 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:40.119877 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:40.154528 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:40.170064 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:40.178056 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:40.610395 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:40.654119 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:40.670568 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:40.678346 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:41.110473 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:41.153670 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:41.169591 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:41.179771 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:41.611093 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:41.654802 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:41.668854 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:41.678469 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:42.111095 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:42.153781 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:42.169644 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:42.178751 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:42.611557 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:42.619382 1048225 node_ready.go:58] node "addons-598993" has status "Ready":"False"
	I1002 21:24:42.654277 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:42.669641 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:42.678321 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:43.110133 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:43.153593 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:43.169022 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:43.178845 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:43.611099 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:43.653651 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:43.669531 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:43.679656 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:44.155401 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:44.162230 1048225 node_ready.go:49] node "addons-598993" has status "Ready":"True"
	I1002 21:24:44.162257 1048225 node_ready.go:38] duration metric: took 31.364773334s waiting for node "addons-598993" to be "Ready" ...
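
The two lines above mark the end of the harness's node-Ready wait (about 31s in this run). As a point of reference only, here is a minimal sketch of one way to poll that condition. It is not minikube's implementation; it assumes kubectl is on PATH and reuses the context and node name that appear in the log:

	// node_ready_sketch.go - illustrative only, NOT minikube's implementation.
	// Polls the node's Ready condition with kubectl until it reports "True"
	// or a timeout is reached.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-598993",
				"get", "node", "addons-598993",
				"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(3 * time.Second) // short pause between checks, as in the log above
		}
		fmt.Println("timed out waiting for node to be Ready")
	}
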
	I1002 21:24:44.162268 1048225 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:24:44.213345 1048225 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 21:24:44.213372 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:44.218778 1048225 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gzc5v" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:44.231588 1048225 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 21:24:44.231614 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:44.303026 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:44.627648 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:44.691206 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:44.698660 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:44.699872 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:45.121313 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:45.155816 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:45.171837 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:45.179797 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:45.431938 1048225 pod_ready.go:92] pod "coredns-5dd5756b68-gzc5v" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:45.431960 1048225 pod_ready.go:81] duration metric: took 1.213147985s waiting for pod "coredns-5dd5756b68-gzc5v" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.431984 1048225 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.439991 1048225 pod_ready.go:92] pod "etcd-addons-598993" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:45.440021 1048225 pod_ready.go:81] duration metric: took 8.028902ms waiting for pod "etcd-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.440037 1048225 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.446255 1048225 pod_ready.go:92] pod "kube-apiserver-addons-598993" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:45.446284 1048225 pod_ready.go:81] duration metric: took 6.238026ms waiting for pod "kube-apiserver-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.446317 1048225 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.452511 1048225 pod_ready.go:92] pod "kube-controller-manager-addons-598993" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:45.452533 1048225 pod_ready.go:81] duration metric: took 6.201915ms waiting for pod "kube-controller-manager-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.452550 1048225 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2xsp" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.611043 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:45.655253 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:45.670465 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:45.682530 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:45.720434 1048225 pod_ready.go:92] pod "kube-proxy-z2xsp" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:45.720467 1048225 pod_ready.go:81] duration metric: took 267.909077ms waiting for pod "kube-proxy-z2xsp" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:45.720480 1048225 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:46.110801 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:46.120225 1048225 pod_ready.go:92] pod "kube-scheduler-addons-598993" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:46.120250 1048225 pod_ready.go:81] duration metric: took 399.762002ms waiting for pod "kube-scheduler-addons-598993" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:46.120262 1048225 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:46.156960 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:46.170638 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:46.179020 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:46.612402 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:46.660059 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:46.672574 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:46.687210 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:47.111869 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:47.155165 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:47.170327 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:47.180349 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:47.611165 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:47.673493 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:47.696675 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:47.697573 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:48.111263 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:48.156614 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:48.171552 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:48.183543 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:48.429785 1048225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace has status "Ready":"False"
	I1002 21:24:48.642514 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:48.696210 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:48.705213 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:48.710781 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:49.121909 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:49.169957 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:49.187941 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:49.189725 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:49.612075 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:49.656561 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:49.670503 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:49.683750 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:50.110345 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:50.156880 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:50.173292 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:50.183870 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:50.435979 1048225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace has status "Ready":"False"
	I1002 21:24:50.611375 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:50.668663 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:50.674546 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:50.678135 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:51.119288 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:51.196240 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:51.204110 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:51.212886 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:51.610967 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:51.677538 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:51.704658 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:51.714012 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:52.110496 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:52.161339 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:52.170871 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:52.178989 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:52.611956 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:52.656534 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:52.670511 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:52.679423 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:52.928620 1048225 pod_ready.go:102] pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace has status "Ready":"False"
	I1002 21:24:53.115455 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:53.159858 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:53.173153 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:53.179276 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:53.610541 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:53.669470 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:53.687746 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:53.692916 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:54.111728 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:54.156134 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:54.170573 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:54.182293 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:54.427566 1048225 pod_ready.go:92] pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace has status "Ready":"True"
	I1002 21:24:54.427590 1048225 pod_ready.go:81] duration metric: took 8.307321253s waiting for pod "metrics-server-7c66d45ddc-qq7vr" in "kube-system" namespace to be "Ready" ...
	I1002 21:24:54.427613 1048225 pod_ready.go:38] duration metric: took 10.265325256s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:24:54.427628 1048225 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:24:54.427688 1048225 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:24:54.442391 1048225 api_server.go:72] duration metric: took 43.609862284s to wait for apiserver process to appear ...
	I1002 21:24:54.442413 1048225 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:24:54.442429 1048225 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 21:24:54.452146 1048225 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 21:24:54.453586 1048225 api_server.go:141] control plane version: v1.28.2
	I1002 21:24:54.453614 1048225 api_server.go:131] duration metric: took 11.193908ms to wait for apiserver health ...
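
The healthz step above polls the apiserver endpoint until it answers 200 with the body "ok". A minimal sketch of that kind of probe, not minikube's implementation, assuming the endpoint shown in the log and skipping certificate verification because the apiserver's certificate is not trusted by the host running the check:

	// healthz_sketch.go - illustrative only, NOT minikube's implementation.
	// Issues a single GET against the apiserver healthz endpoint and prints
	// the status code and body; a real wait loop would retry until 200/"ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
	}
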
	I1002 21:24:54.453623 1048225 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:24:54.463809 1048225 system_pods.go:59] 17 kube-system pods found
	I1002 21:24:54.463852 1048225 system_pods.go:61] "coredns-5dd5756b68-gzc5v" [8bd76a59-a655-4bec-8d8c-69d9bc919d68] Running
	I1002 21:24:54.463863 1048225 system_pods.go:61] "csi-hostpath-attacher-0" [196183ff-fea9-4374-aeb2-18693861cd42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:24:54.463904 1048225 system_pods.go:61] "csi-hostpath-resizer-0" [a7f0c16c-7123-4bea-b0ef-4bebcca72190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:24:54.463914 1048225 system_pods.go:61] "csi-hostpathplugin-zzclk" [56a510d9-093f-488b-8ec4-d9946328acf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:24:54.463924 1048225 system_pods.go:61] "etcd-addons-598993" [030d9a25-a817-4c99-80fa-058191d1194a] Running
	I1002 21:24:54.463930 1048225 system_pods.go:61] "kindnet-578ms" [d7f41306-ffb5-448a-b806-c26fb29d9ef0] Running
	I1002 21:24:54.463939 1048225 system_pods.go:61] "kube-apiserver-addons-598993" [395135a4-3d82-425e-a1b6-fd8c26af2528] Running
	I1002 21:24:54.463944 1048225 system_pods.go:61] "kube-controller-manager-addons-598993" [899b9729-0f54-4a9c-87cc-7982e85372d8] Running
	I1002 21:24:54.463963 1048225 system_pods.go:61] "kube-ingress-dns-minikube" [9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:24:54.463975 1048225 system_pods.go:61] "kube-proxy-z2xsp" [eb526ac0-d532-4043-bcaf-4ae5ccf1caf2] Running
	I1002 21:24:54.463981 1048225 system_pods.go:61] "kube-scheduler-addons-598993" [778b32b3-e262-4305-aada-1343533469a7] Running
	I1002 21:24:54.463988 1048225 system_pods.go:61] "metrics-server-7c66d45ddc-qq7vr" [4bbba458-aca3-43cb-9507-4d820720e1d6] Running
	I1002 21:24:54.464000 1048225 system_pods.go:61] "registry-84c9d" [9b98d40f-1c78-4339-97e4-d24d9682a23f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:24:54.464008 1048225 system_pods.go:61] "registry-proxy-7jxhh" [f095db8c-4c24-442b-809a-c0488c3579ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:24:54.464020 1048225 system_pods.go:61] "snapshot-controller-58dbcc7b99-kmfml" [99dbf680-0b9a-4f85-9531-ec05f8bd1493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:24:54.464028 1048225 system_pods.go:61] "snapshot-controller-58dbcc7b99-xv5n6" [7f371f05-54f9-4270-a138-1f8c0ade1365] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:24:54.464037 1048225 system_pods.go:61] "storage-provisioner" [efb2e651-b3bd-44e9-95a5-f893fdbe1a68] Running
	I1002 21:24:54.464043 1048225 system_pods.go:74] duration metric: took 10.41385ms to wait for pod list to return data ...
	I1002 21:24:54.464057 1048225 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:24:54.466699 1048225 default_sa.go:45] found service account: "default"
	I1002 21:24:54.466726 1048225 default_sa.go:55] duration metric: took 2.662042ms for default service account to be created ...
	I1002 21:24:54.466736 1048225 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:24:54.476873 1048225 system_pods.go:86] 17 kube-system pods found
	I1002 21:24:54.476913 1048225 system_pods.go:89] "coredns-5dd5756b68-gzc5v" [8bd76a59-a655-4bec-8d8c-69d9bc919d68] Running
	I1002 21:24:54.476924 1048225 system_pods.go:89] "csi-hostpath-attacher-0" [196183ff-fea9-4374-aeb2-18693861cd42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 21:24:54.476933 1048225 system_pods.go:89] "csi-hostpath-resizer-0" [a7f0c16c-7123-4bea-b0ef-4bebcca72190] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 21:24:54.476942 1048225 system_pods.go:89] "csi-hostpathplugin-zzclk" [56a510d9-093f-488b-8ec4-d9946328acf0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 21:24:54.476958 1048225 system_pods.go:89] "etcd-addons-598993" [030d9a25-a817-4c99-80fa-058191d1194a] Running
	I1002 21:24:54.476964 1048225 system_pods.go:89] "kindnet-578ms" [d7f41306-ffb5-448a-b806-c26fb29d9ef0] Running
	I1002 21:24:54.476970 1048225 system_pods.go:89] "kube-apiserver-addons-598993" [395135a4-3d82-425e-a1b6-fd8c26af2528] Running
	I1002 21:24:54.476976 1048225 system_pods.go:89] "kube-controller-manager-addons-598993" [899b9729-0f54-4a9c-87cc-7982e85372d8] Running
	I1002 21:24:54.476984 1048225 system_pods.go:89] "kube-ingress-dns-minikube" [9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 21:24:54.476989 1048225 system_pods.go:89] "kube-proxy-z2xsp" [eb526ac0-d532-4043-bcaf-4ae5ccf1caf2] Running
	I1002 21:24:54.476995 1048225 system_pods.go:89] "kube-scheduler-addons-598993" [778b32b3-e262-4305-aada-1343533469a7] Running
	I1002 21:24:54.477006 1048225 system_pods.go:89] "metrics-server-7c66d45ddc-qq7vr" [4bbba458-aca3-43cb-9507-4d820720e1d6] Running
	I1002 21:24:54.477013 1048225 system_pods.go:89] "registry-84c9d" [9b98d40f-1c78-4339-97e4-d24d9682a23f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 21:24:54.477020 1048225 system_pods.go:89] "registry-proxy-7jxhh" [f095db8c-4c24-442b-809a-c0488c3579ca] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 21:24:54.477028 1048225 system_pods.go:89] "snapshot-controller-58dbcc7b99-kmfml" [99dbf680-0b9a-4f85-9531-ec05f8bd1493] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:24:54.477040 1048225 system_pods.go:89] "snapshot-controller-58dbcc7b99-xv5n6" [7f371f05-54f9-4270-a138-1f8c0ade1365] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 21:24:54.477045 1048225 system_pods.go:89] "storage-provisioner" [efb2e651-b3bd-44e9-95a5-f893fdbe1a68] Running
	I1002 21:24:54.477052 1048225 system_pods.go:126] duration metric: took 10.310425ms to wait for k8s-apps to be running ...
	I1002 21:24:54.477060 1048225 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:24:54.477122 1048225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:24:54.492055 1048225 system_svc.go:56] duration metric: took 14.985258ms WaitForService to wait for kubelet.
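
The kubelet check above runs systemctl over SSH inside the node. For reference, a minimal local equivalent, not minikube's implementation, that checks the kubelet unit directly and relies on the is-active exit code (0 means active):

	// kubelet_active_sketch.go - illustrative only, NOT minikube's implementation.
	// "systemctl is-active --quiet" prints nothing and signals the unit state
	// through its exit code, which os/exec surfaces as an error when non-zero.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
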
	I1002 21:24:54.492127 1048225 kubeadm.go:581] duration metric: took 43.659603752s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 21:24:54.492184 1048225 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:24:54.495662 1048225 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:24:54.495697 1048225 node_conditions.go:123] node cpu capacity is 2
	I1002 21:24:54.495710 1048225 node_conditions.go:105] duration metric: took 3.508939ms to run NodePressure ...
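
The NodePressure step above reads the node's reported capacity (203034800Ki ephemeral storage and 2 CPUs in this run). A minimal sketch of retrieving those figures, again illustrative only, assuming kubectl and the same context as before:

	// node_capacity_sketch.go - illustrative only, NOT minikube's implementation.
	// Prints the node's capacity map, which includes cpu and ephemeral-storage.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-598993",
			"get", "node", "addons-598993",
			"-o", "jsonpath={.status.capacity}").Output()
		if err != nil {
			fmt.Println("failed to read node capacity:", err)
			return
		}
		fmt.Printf("node capacity: %s\n", out)
	}
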
	I1002 21:24:54.495720 1048225 start.go:228] waiting for startup goroutines ...
	I1002 21:24:54.610519 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:54.655228 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:54.669625 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:54.678098 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:55.110271 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:55.155559 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:55.170136 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:55.178628 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:55.610469 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:55.654825 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:55.669519 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:55.680269 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:56.111465 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:56.156728 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:56.171694 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:56.180526 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:56.615219 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:56.662969 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:56.691235 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:56.700225 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:57.111716 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:57.156896 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:57.171101 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:57.179935 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:57.610789 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:57.657387 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:57.674355 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:57.681711 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:58.113889 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:58.156063 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:58.170171 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:58.181972 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:58.610914 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:58.655403 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:58.670775 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:58.715240 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:59.112159 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:59.155891 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:59.171705 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:59.184825 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:24:59.611096 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:24:59.654837 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:24:59.670205 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:24:59.679666 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:00.133490 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:00.193404 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:00.212230 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:00.219383 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:00.610352 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:00.655766 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:00.670378 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:00.679977 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:01.110328 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:01.160565 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:01.182851 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:01.188306 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:01.610634 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:01.658939 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:01.692525 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:01.693493 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:02.110307 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:02.163158 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:02.175218 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:02.194107 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:02.610289 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:02.659102 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:02.670298 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:02.680078 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:03.110136 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:03.154797 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:03.170914 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:03.179710 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:03.610713 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:03.656026 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:03.674507 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:03.679319 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:04.110829 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:04.158124 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:04.176168 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:04.195690 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:04.610769 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:04.655114 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:04.670393 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:04.678768 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:05.112365 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:05.156453 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:05.171827 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:05.178367 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:05.610955 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:05.654987 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:05.669440 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:05.678163 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:06.110276 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:06.154929 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:06.171086 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:06.188535 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:06.611209 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:06.682604 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:06.694798 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:06.703805 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:07.112232 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:07.155551 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:07.195573 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:07.196491 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:07.619850 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:07.656342 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:07.695769 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:07.706750 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:08.115100 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:08.155636 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:08.170331 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:08.179277 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:08.610934 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:08.655124 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:08.669680 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:08.678282 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:09.110196 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:09.155211 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:09.171235 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:09.179141 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:09.617742 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:09.657656 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:09.685456 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:09.696030 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:10.111786 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:10.157702 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:10.173529 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:10.183589 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:10.612450 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:10.661534 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:10.670581 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:10.680536 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:11.110700 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:11.155158 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:11.170450 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:11.178886 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:11.612207 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:11.658157 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:11.671022 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:11.680771 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:12.120680 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:12.161176 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:12.170376 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:12.182223 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:12.610899 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:12.654661 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:12.670125 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:12.683572 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:13.110042 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:13.166994 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:13.171627 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:13.178995 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:13.611614 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:13.657017 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:13.673624 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:13.678583 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:14.112272 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:14.155968 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:14.175033 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:14.184292 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:14.612790 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:14.654736 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:14.670033 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:14.678782 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:15.111019 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:15.155633 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:15.170164 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:15.179055 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:15.619072 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:15.655777 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:15.670163 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:15.678948 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:16.111439 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:16.155205 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:16.183221 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:16.190879 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:16.611472 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:16.660937 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:16.683070 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:16.712830 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:17.110187 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:17.155138 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:17.169847 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:17.178702 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:17.610487 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:17.654359 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:17.670914 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:17.683882 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:18.110922 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:18.154813 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:18.169501 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:18.178869 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:18.611088 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:18.655308 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:18.671010 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:18.678252 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:19.110345 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:19.154955 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:19.169806 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:19.179718 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:19.613107 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:19.662938 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:19.674455 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:19.688360 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:20.111320 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:20.156283 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:20.170582 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:20.180385 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:20.611259 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:20.657897 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:20.670255 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:20.678941 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:21.112002 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:21.161409 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:21.170202 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:21.179985 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:21.617004 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:21.655577 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:21.670669 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:21.684501 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:22.114357 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:22.161408 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:22.175849 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:22.180502 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:22.615505 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:22.656539 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:22.670305 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:22.679337 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:23.115210 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:23.154637 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:23.172530 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 21:25:23.191462 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:23.610249 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:23.655327 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:23.670127 1048225 kapi.go:107] duration metric: took 1m7.041792964s to wait for kubernetes.io/minikube-addons=registry ...
	I1002 21:25:23.678701 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:24.110911 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:24.155901 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:24.178600 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:24.610420 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:24.658818 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:24.678665 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:25.111044 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:25.155312 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:25.179824 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:25.610981 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:25.655325 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:25.679145 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:26.112476 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:26.155455 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:26.179195 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:26.613820 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:26.658633 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:26.680299 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:27.110722 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 21:25:27.155041 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:27.178623 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:27.610855 1048225 kapi.go:107] duration metric: took 1m7.627604615s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 21:25:27.613170 1048225 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-598993 cluster.
	I1002 21:25:27.615392 1048225 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 21:25:27.617487 1048225 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 21:25:27.655532 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:27.683204 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:28.155483 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:28.178225 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:28.654682 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:28.679440 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:29.158381 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:29.184250 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:29.655085 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:29.679594 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:30.157970 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:30.178954 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:30.654963 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:30.678719 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:31.156548 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:31.181509 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:31.665188 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:31.683152 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:32.159407 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:32.178803 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:32.656185 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:32.678137 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:33.163886 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:33.179686 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:33.657367 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:33.680724 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:34.154840 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:34.178779 1048225 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 21:25:34.654865 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:34.678310 1048225 kapi.go:107] duration metric: took 1m18.055906548s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 21:25:35.156653 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:35.674547 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:36.155260 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:36.660287 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:37.155321 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:37.655251 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:38.154831 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:38.654776 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:39.155107 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:39.656879 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:40.155726 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:40.655041 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:41.155140 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:41.663734 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:42.160060 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:42.655130 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:43.157839 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:43.656054 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:44.155162 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:44.655578 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:45.193548 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:45.660180 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:46.156126 1048225 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 21:25:46.655913 1048225 kapi.go:107] duration metric: took 1m29.57381579s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 21:25:46.658445 1048225 out.go:177] * Enabled addons: default-storageclass, ingress-dns, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1002 21:25:46.660553 1048225 addons.go:502] enable addons completed in 1m36.223353908s: enabled=[default-storageclass ingress-dns cloud-spanner storage-provisioner inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1002 21:25:46.660649 1048225 start.go:233] waiting for cluster config update ...
	I1002 21:25:46.660705 1048225 start.go:242] writing updated cluster config ...
	I1002 21:25:46.661104 1048225 ssh_runner.go:195] Run: rm -f paused
	I1002 21:25:46.779726 1048225 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 21:25:46.795588 1048225 out.go:177] * Done! kubectl is now configured to use "addons-598993" cluster and "default" namespace by default
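The long run of kapi.go:96 lines above is minikube polling each addon's pods by label selector until they leave Pending; each wait ends with a kapi.go:107 duration metric once the pods report Running. A minimal sketch of that polling pattern with client-go, assuming a hypothetical WaitForLabel helper rather than minikube's actual kapi code:

```go
// Sketch of the polling pattern behind the kapi.go "waiting for pod" lines above:
// list pods by label selector and retry until one reports Running. Hypothetical
// helper for illustration only, not minikube's actual kapi implementation.
package addonwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabel blocks until a pod matching selector in namespace ns is Running,
// or the timeout elapses.
func WaitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // the log above shows roughly a 500ms poll cadence
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}
```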
	
	* 
	* ==> CRI-O <==
	* Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.427345135Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=5af4c407-5081-435e-916c-08c02a703227 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.427551230Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=5af4c407-5081-435e-916c-08c02a703227 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.429704222Z" level=info msg="Creating container: default/hello-world-app-5d77478584-9hfzm/hello-world-app" id=372b3dd6-e78f-426d-828f-2ff5f4ceeafc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.429805833Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.521041044Z" level=info msg="Created container f2a5aad62aec30d9c23593630e82219a113538c4ab189c7843195f5b60961e6c: default/hello-world-app-5d77478584-9hfzm/hello-world-app" id=372b3dd6-e78f-426d-828f-2ff5f4ceeafc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.521984827Z" level=info msg="Starting container: f2a5aad62aec30d9c23593630e82219a113538c4ab189c7843195f5b60961e6c" id=67ebc029-1d76-4c20-a4f8-99ab24450eb4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:29:01 addons-598993 conmon[8434]: conmon f2a5aad62aec30d9c235 <ninfo>: container 8445 exited with status 1
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.540654996Z" level=info msg="Started container" PID=8445 containerID=f2a5aad62aec30d9c23593630e82219a113538c4ab189c7843195f5b60961e6c description=default/hello-world-app-5d77478584-9hfzm/hello-world-app id=67ebc029-1d76-4c20-a4f8-99ab24450eb4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ced97715631a835938f09834e2178bb871fa621fca1c4995d243cf54578893df
	Oct 02 21:29:01 addons-598993 crio[889]: time="2023-10-02 21:29:01.947142370Z" level=info msg="Stopping container: d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91 (timeout: 2s)" id=f57eb3a4-2ef0-4707-b6d8-3d1351ee09e0 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 21:29:02 addons-598993 crio[889]: time="2023-10-02 21:29:02.245566736Z" level=info msg="Removing container: c0a006d791ca681c20a94eaa75fbbd4283287c2e1ac60818e4711ff36fd3f02f" id=3adf677f-1ce7-4d4c-bdb9-e22672e93a91 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:29:02 addons-598993 crio[889]: time="2023-10-02 21:29:02.273696981Z" level=info msg="Removed container c0a006d791ca681c20a94eaa75fbbd4283287c2e1ac60818e4711ff36fd3f02f: default/hello-world-app-5d77478584-9hfzm/hello-world-app" id=3adf677f-1ce7-4d4c-bdb9-e22672e93a91 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:29:03 addons-598993 crio[889]: time="2023-10-02 21:29:03.957061879Z" level=warning msg="Stopping container d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=f57eb3a4-2ef0-4707-b6d8-3d1351ee09e0 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 21:29:04 addons-598993 conmon[4930]: conmon d6945cb55c962cc8317a <ninfo>: container 4952 exited with status 137
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.128364900Z" level=info msg="Stopped container d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91: ingress-nginx/ingress-nginx-controller-f6b66b4b9-6ntgt/controller" id=f57eb3a4-2ef0-4707-b6d8-3d1351ee09e0 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.129127375Z" level=info msg="Stopping pod sandbox: a91966702d2867c871e464d8ebfc0c1880262760140e8a664bb39bcb653c77ca" id=bda93ea0-6fd6-4e3d-bef1-829b3ac5b0b5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.134406367Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-T26N5C6ONUAV65JD - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-746AT2KS7XGNRV5B - [0:0]\n-X KUBE-HP-T26N5C6ONUAV65JD\n-X KUBE-HP-746AT2KS7XGNRV5B\nCOMMIT\n"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.139530028Z" level=info msg="Closing host port tcp:80"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.139612112Z" level=info msg="Closing host port tcp:443"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.144982024Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.145015206Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.145177675Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-f6b66b4b9-6ntgt Namespace:ingress-nginx ID:a91966702d2867c871e464d8ebfc0c1880262760140e8a664bb39bcb653c77ca UID:10f44c59-5e95-4757-8f50-0a2240669870 NetNS:/var/run/netns/f5fd2d48-9f20-44da-9d21-c5531e1d88c8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.145337223Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-f6b66b4b9-6ntgt from CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.196978403Z" level=info msg="Stopped pod sandbox: a91966702d2867c871e464d8ebfc0c1880262760140e8a664bb39bcb653c77ca" id=bda93ea0-6fd6-4e3d-bef1-829b3ac5b0b5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.252299756Z" level=info msg="Removing container: d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91" id=ef211616-9bff-4e48-8935-2b422507a551 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 21:29:04 addons-598993 crio[889]: time="2023-10-02 21:29:04.271577685Z" level=info msg="Removed container d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91: ingress-nginx/ingress-nginx-controller-f6b66b4b9-6ntgt/controller" id=ef211616-9bff-4e48-8935-2b422507a551 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f2a5aad62aec3       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                             7 seconds ago        Exited              hello-world-app           2                   ced97715631a8       hello-world-app-5d77478584-9hfzm
	fbad157326646       ghcr.io/headlamp-k8s/headlamp@sha256:44b17c125fc5da7899f2583ca3468a31cc80ea52c9ef2aad503f58d91908e4c1                        About a minute ago   Running             headlamp                  0                   29fb33e69a3db       headlamp-58b88cff49-2dxlr
	677b8e1c9659f       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                              2 minutes ago        Running             nginx                     0                   9db86e9dcca8d       nginx
	ca2551f81e65e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago        Running             gcp-auth                  0                   5560d15c12a51       gcp-auth-d4c87556c-zgbrg
	26027bfddb4fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago        Exited              patch                     0                   bbb5d6d9c6e64       ingress-nginx-admission-patch-xrz48
	4c4b9057bd012       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago        Exited              create                    0                   81d893f44c53f       ingress-nginx-admission-create-np8mg
	b8876fc5b7043       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago        Running             storage-provisioner       0                   411a9a84757e5       storage-provisioner
	87cbba7d4a2a9       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago        Running             coredns                   0                   20ba071065101       coredns-5dd5756b68-gzc5v
	d377995849de8       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                             4 minutes ago        Running             kube-proxy                0                   799fb5bd412e5       kube-proxy-z2xsp
	3acfff9baa3ac       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             4 minutes ago        Running             kindnet-cni               0                   9ad2243ff893e       kindnet-578ms
	3dd015839e215       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                             5 minutes ago        Running             kube-apiserver            0                   005dc9a614121       kube-apiserver-addons-598993
	fd317362d70eb       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                             5 minutes ago        Running             kube-scheduler            0                   725b04e210d85       kube-scheduler-addons-598993
	c610d971adda4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago        Running             etcd                      0                   fc42ca1a3928c       etcd-addons-598993
	eb50588df2df3       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                             5 minutes ago        Running             kube-controller-manager   0                   5bdc9562614db       kube-controller-manager-addons-598993
	
	* 
	* ==> coredns [87cbba7d4a2a9ddaac5f3f454e6d5a19471300a50da9b0ca82b84b0122537bfd] <==
	* [INFO] 10.244.0.17:54770 - 57991 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000254029s
	[INFO] 10.244.0.17:54770 - 45931 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002066543s
	[INFO] 10.244.0.17:52186 - 1266 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002132347s
	[INFO] 10.244.0.17:52186 - 19802 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00183159s
	[INFO] 10.244.0.17:54770 - 6665 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00200932s
	[INFO] 10.244.0.17:52186 - 12868 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000105108s
	[INFO] 10.244.0.17:54770 - 23438 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000346181s
	[INFO] 10.244.0.17:38535 - 61100 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117702s
	[INFO] 10.244.0.17:50956 - 21356 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000336589s
	[INFO] 10.244.0.17:38535 - 61094 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086071s
	[INFO] 10.244.0.17:50956 - 54615 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087302s
	[INFO] 10.244.0.17:50956 - 33307 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00011392s
	[INFO] 10.244.0.17:38535 - 39835 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084898s
	[INFO] 10.244.0.17:38535 - 62809 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000070285s
	[INFO] 10.244.0.17:50956 - 32749 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045439s
	[INFO] 10.244.0.17:50956 - 37187 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052578s
	[INFO] 10.244.0.17:38535 - 56113 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000128262s
	[INFO] 10.244.0.17:50956 - 52620 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063794s
	[INFO] 10.244.0.17:38535 - 58951 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065484s
	[INFO] 10.244.0.17:50956 - 35147 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001755586s
	[INFO] 10.244.0.17:38535 - 25612 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003652627s
	[INFO] 10.244.0.17:50956 - 8768 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001655246s
	[INFO] 10.244.0.17:38535 - 16585 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001548596s
	[INFO] 10.244.0.17:50956 - 2593 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.001087741s
	[INFO] 10.244.0.17:38535 - 15242 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000082871s
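The NXDOMAIN sequence above is ordinary Kubernetes resolver behavior: with ndots high enough, the client tries the service name with each search-list suffix before the bare name, and only the final query resolves. A small sketch of that expansion, with the search list inferred from the query suffixes in the log and ndots assumed to be the usual 5:

```go
// Sketch of the resolver search-list expansion that produces the NXDOMAIN chain in
// the coredns log above. The search list and ndots value are assumptions based on a
// typical pod resolv.conf (here inferred from the suffixes seen in the queries).
package dnssketch

import "strings"

// SearchCandidates returns the names a stub resolver would try, in order, for a
// non-fully-qualified name: each search suffix first (when the name has fewer
// than ndots dots), then the name as given.
func SearchCandidates(name string, search []string, ndots int) []string {
	if strings.HasSuffix(name, ".") { // already fully qualified, no expansion
		return []string{strings.TrimSuffix(name, ".")}
	}
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

// Example: SearchCandidates("hello-world-app.default.svc.cluster.local",
//   []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local",
//     "us-east-2.compute.internal"}, 5)
// yields the four suffixed queries answered NXDOMAIN above, followed by the bare
// name that finally returns NOERROR.
```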
	
	* 
	* ==> describe nodes <==
	* Name:               addons-598993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-598993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=addons-598993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T21_23_58_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-598993
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:23:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-598993
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 21:29:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 21:29:04 +0000   Mon, 02 Oct 2023 21:23:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 21:29:04 +0000   Mon, 02 Oct 2023 21:23:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 21:29:04 +0000   Mon, 02 Oct 2023 21:23:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 21:29:04 +0000   Mon, 02 Oct 2023 21:24:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-598993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 393180d4fb6c4075a5c6a556f023c996
	  System UUID:                09ee89ce-4966-48a5-8e56-deaab6494fc0
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-9hfzm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m49s
	  gcp-auth                    gcp-auth-d4c87556c-zgbrg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  headlamp                    headlamp-58b88cff49-2dxlr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 coredns-5dd5756b68-gzc5v                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m59s
	  kube-system                 etcd-addons-598993                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m12s
	  kube-system                 kindnet-578ms                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m59s
	  kube-system                 kube-apiserver-addons-598993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-addons-598993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-z2xsp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-addons-598993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m53s  kube-proxy       
	  Normal  Starting                 5m12s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m12s  kubelet          Node addons-598993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m12s  kubelet          Node addons-598993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m12s  kubelet          Node addons-598993 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m     node-controller  Node addons-598993 event: Registered Node addons-598993 in Controller
	  Normal  NodeReady                4m26s  kubelet          Node addons-598993 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001052] FS-Cache: O-key=[8] '995f3b0000000000'
	[  +0.000713] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000946] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000734ba06c
	[  +0.001060] FS-Cache: N-key=[8] '995f3b0000000000'
	[  +0.002690] FS-Cache: Duplicate cookie detected
	[  +0.000682] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001011] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000014d34bf7
	[  +0.001054] FS-Cache: O-key=[8] '995f3b0000000000'
	[  +0.000708] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=0000000092679c6a
	[  +0.001069] FS-Cache: N-key=[8] '995f3b0000000000'
	[  +3.517964] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000165fee4f
	[  +0.001069] FS-Cache: O-key=[8] '985f3b0000000000'
	[  +0.000712] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000ce3b1a8e
	[  +0.001039] FS-Cache: N-key=[8] '985f3b0000000000'
	[  +0.365924] FS-Cache: Duplicate cookie detected
	[  +0.000713] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000012a16242
	[  +0.001109] FS-Cache: O-key=[8] '9e5f3b0000000000'
	[  +0.000704] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=0000000040870a8c
	[  +0.001038] FS-Cache: N-key=[8] '9e5f3b0000000000'
	
	* 
	* ==> etcd [c610d971adda465e5fd1a9361d327f915586d2aeecd3d05971e0fc20aefd2546] <==
	* {"level":"info","ts":"2023-10-02T21:23:50.570527Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-02T21:23:51.437242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T21:23:51.437377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T21:23:51.437424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-02T21:23:51.437473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T21:23:51.43751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-02T21:23:51.437552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-02T21:23:51.43759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-02T21:23:51.441352Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:23:51.443493Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-598993 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T21:23:51.445278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:23:51.445445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:23:51.44551Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:23:51.445549Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T21:23:51.446801Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T21:23:51.451335Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T21:23:51.457483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-02T21:23:51.481261Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T21:23:51.481459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-10-02T21:24:12.403282Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.256187ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128024200668809318 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/certificate-controller\" mod_revision:207 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" value_size:139 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/certificate-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-02T21:24:12.403458Z","caller":"traceutil/trace.go:171","msg":"trace[437178263] linearizableReadLoop","detail":"{readStateIndex:373; appliedIndex:372; }","duration":"185.977649ms","start":"2023-10-02T21:24:12.21747Z","end":"2023-10-02T21:24:12.403447Z","steps":["trace[437178263] 'read index received'  (duration: 91.487µs)","trace[437178263] 'applied index is now lower than readState.Index'  (duration: 185.884604ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-02T21:24:12.403531Z","caller":"traceutil/trace.go:171","msg":"trace[884262186] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"287.012556ms","start":"2023-10-02T21:24:12.116509Z","end":"2023-10-02T21:24:12.403521Z","steps":["trace[884262186] 'process raft request'  (duration: 16.627129ms)","trace[884262186] 'compare'  (duration: 181.138468ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-02T21:24:12.403623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.170838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2023-10-02T21:24:12.413823Z","caller":"traceutil/trace.go:171","msg":"trace[339653850] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5dd5756b68; range_end:; response_count:1; response_revision:363; }","duration":"196.363552ms","start":"2023-10-02T21:24:12.217439Z","end":"2023-10-02T21:24:12.413803Z","steps":["trace[339653850] 'agreement among raft nodes before linearized reading'  (duration: 186.107388ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-02T21:24:13.421974Z","caller":"traceutil/trace.go:171","msg":"trace[58624557] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"111.048152ms","start":"2023-10-02T21:24:13.310911Z","end":"2023-10-02T21:24:13.421959Z","steps":["trace[58624557] 'process raft request'  (duration: 110.950159ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [ca2551f81e65ee8cd02799f81df86e40f9ac1a9d94d5cfe63b7e7ec8981175e5] <==
	* 2023/10/02 21:25:26 GCP Auth Webhook started!
	2023/10/02 21:25:57 Ready to marshal response ...
	2023/10/02 21:25:57 Ready to write response ...
	2023/10/02 21:26:18 Ready to marshal response ...
	2023/10/02 21:26:18 Ready to write response ...
	2023/10/02 21:26:20 Ready to marshal response ...
	2023/10/02 21:26:20 Ready to write response ...
	2023/10/02 21:26:45 Ready to marshal response ...
	2023/10/02 21:26:45 Ready to write response ...
	2023/10/02 21:27:07 Ready to marshal response ...
	2023/10/02 21:27:07 Ready to write response ...
	2023/10/02 21:27:07 Ready to marshal response ...
	2023/10/02 21:27:07 Ready to write response ...
	2023/10/02 21:27:16 Ready to marshal response ...
	2023/10/02 21:27:16 Ready to write response ...
	2023/10/02 21:28:01 Ready to marshal response ...
	2023/10/02 21:28:01 Ready to write response ...
	2023/10/02 21:28:01 Ready to marshal response ...
	2023/10/02 21:28:01 Ready to write response ...
	2023/10/02 21:28:01 Ready to marshal response ...
	2023/10/02 21:28:01 Ready to write response ...
	2023/10/02 21:28:43 Ready to marshal response ...
	2023/10/02 21:28:43 Ready to write response ...
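The gcp-auth webhook logged above mutates new pods to mount the stored GCP credentials; the addon output earlier in this log notes that a pod can opt out by carrying the gcp-auth-skip-secret label. A minimal client-go sketch of such a pod spec, with the label value assumed to be "true":

```go
// Pod spec carrying the gcp-auth-skip-secret label mentioned in the addon output
// earlier in this log, so the gcp-auth webhook leaves it unmutated. The label value
// "true" is an assumption; the log only names the key.
package gcpauthsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// OptOutPod builds a minimal pod that opts out of GCP credential injection.
func OptOutPod(name, namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: namespace,
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/google-samples/hello-app:1.0", // image already pulled in this test run
			}},
		},
	}
}
```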
	
	* 
	* ==> kernel <==
	*  21:29:09 up  4:11,  0 users,  load average: 0.61, 1.91, 2.88
	Linux addons-598993 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3acfff9baa3ac2a2a3923d560a9c00731994c6fd6b4b8a404e22c935ce160912] <==
	* I1002 21:27:03.450227       1 main.go:227] handling current node
	I1002 21:27:13.453849       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:27:13.453877       1 main.go:227] handling current node
	I1002 21:27:23.466926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:27:23.466955       1 main.go:227] handling current node
	I1002 21:27:33.478238       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:27:33.478284       1 main.go:227] handling current node
	I1002 21:27:43.482062       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:27:43.482092       1 main.go:227] handling current node
	I1002 21:27:53.492902       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:27:53.492939       1 main.go:227] handling current node
	I1002 21:28:03.506926       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:03.509784       1 main.go:227] handling current node
	I1002 21:28:13.522291       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:13.522321       1 main.go:227] handling current node
	I1002 21:28:23.533569       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:23.533670       1 main.go:227] handling current node
	I1002 21:28:33.545701       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:33.545817       1 main.go:227] handling current node
	I1002 21:28:43.573998       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:43.574041       1 main.go:227] handling current node
	I1002 21:28:53.585532       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:28:53.585561       1 main.go:227] handling current node
	I1002 21:29:03.589982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:29:03.590011       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3dd015839e2156d3ef729b792fea660a09ed29ca6f043cc2de4c9227b7729f1f] <==
	* I1002 21:26:20.736210       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.53.63"}
	I1002 21:26:31.852556       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1002 21:26:55.273538       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1002 21:27:00.813845       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.814003       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.824063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.824372       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.835244       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.835378       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.856497       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.856564       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.876778       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.876827       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.876896       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.876930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.894452       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.894509       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 21:27:00.906463       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 21:27:00.906606       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 21:27:01.877897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1002 21:27:01.904727       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 21:27:01.948426       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1002 21:27:32.559302       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1002 21:28:01.031799       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.81.83"}
	I1002 21:28:43.834288       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.1.205"}
	
	* 
	* ==> kube-controller-manager [eb50588df2df33ff33e638037f61616ae1a7a8b01076bc8de2feae2fbfc5e0a7] <==
	* I1002 21:28:06.156005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="11.227707ms"
	I1002 21:28:06.156867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-58b88cff49" duration="35.7µs"
	W1002 21:28:17.040093       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 21:28:17.040126       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 21:28:30.391459       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 21:28:30.391493       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1002 21:28:36.427272       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 21:28:36.427309       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 21:28:43.536735       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1002 21:28:43.580355       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-9hfzm"
	I1002 21:28:43.593844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.937998ms"
	I1002 21:28:43.609133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.206512ms"
	I1002 21:28:43.633479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="24.117865ms"
	I1002 21:28:43.633572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.353µs"
	I1002 21:28:46.230552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="79.368µs"
	I1002 21:28:47.242724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.929µs"
	I1002 21:28:48.223960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.402µs"
	W1002 21:28:53.160911       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 21:28:53.160945       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1002 21:29:00.932124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-f6b66b4b9" duration="12.472µs"
	I1002 21:29:00.932912       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1002 21:29:00.940950       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1002 21:29:02.260283       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.476µs"
	W1002 21:29:04.483309       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1002 21:29:04.483425       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [d377995849de80e2b73ad353bbbccac260a6da17aed4e361c5852793f2147e6a] <==
	* I1002 21:24:15.896936       1 server_others.go:69] "Using iptables proxy"
	I1002 21:24:16.050426       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1002 21:24:16.337250       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:24:16.340544       1 server_others.go:152] "Using iptables Proxier"
	I1002 21:24:16.341323       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 21:24:16.341413       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 21:24:16.341521       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 21:24:16.341798       1 server.go:846] "Version info" version="v1.28.2"
	I1002 21:24:16.342052       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:24:16.343578       1 config.go:188] "Starting service config controller"
	I1002 21:24:16.343697       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 21:24:16.343750       1 config.go:97] "Starting endpoint slice config controller"
	I1002 21:24:16.343795       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 21:24:16.345012       1 config.go:315] "Starting node config controller"
	I1002 21:24:16.345077       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 21:24:16.444281       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 21:24:16.444358       1 shared_informer.go:318] Caches are synced for service config
	I1002 21:24:16.445790       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [fd317362d70ebc247139640135fb200441b325d06e62100d88b52c374c60ef3d] <==
	* W1002 21:23:54.247418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 21:23:54.247512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 21:23:54.247693       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 21:23:54.247778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 21:23:54.247902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 21:23:54.248469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 21:23:54.248581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 21:23:54.248672       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 21:23:54.248116       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 21:23:54.248782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 21:23:54.248208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 21:23:54.248382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 21:23:54.248400       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 21:23:54.247961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 21:23:54.250267       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 21:23:54.250770       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 21:23:54.250836       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 21:23:54.250873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 21:23:55.063612       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 21:23:55.063975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1002 21:23:55.151694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 21:23:55.151741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 21:23:55.369008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 21:23:55.369040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1002 21:23:55.635206       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 02 21:28:57 addons-598993 kubelet[1354]: E1002 21:28:57.692272    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/571172ef3924f4027b427b162b6d4e2edd165096cffad8a4e3891b1f9d9ce370/diff" to get inode usage: stat /var/lib/containers/storage/overlay/571172ef3924f4027b427b162b6d4e2edd165096cffad8a4e3891b1f9d9ce370/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 21:28:57 addons-598993 kubelet[1354]: E1002 21:28:57.701369    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3dc12bdf8b100e58538bb046ee0dabfa52399bff7d0e52d8f2e976cbc7068306/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3dc12bdf8b100e58538bb046ee0dabfa52399bff7d0e52d8f2e976cbc7068306/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 21:28:57 addons-598993 kubelet[1354]: E1002 21:28:57.711168    1354 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bd24c203047632982b1458ff0984e98fb6a7235605e3562551535dc694000fa7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bd24c203047632982b1458ff0984e98fb6a7235605e3562551535dc694000fa7/diff: no such file or directory, extraDiskErr: <nil>
	Oct 02 21:28:59 addons-598993 kubelet[1354]: I1002 21:28:59.854379    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6sqr\" (UniqueName: \"kubernetes.io/projected/9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3-kube-api-access-s6sqr\") pod \"9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3\" (UID: \"9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3\") "
	Oct 02 21:28:59 addons-598993 kubelet[1354]: I1002 21:28:59.857947    1354 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3-kube-api-access-s6sqr" (OuterVolumeSpecName: "kube-api-access-s6sqr") pod "9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3" (UID: "9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3"). InnerVolumeSpecName "kube-api-access-s6sqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 21:28:59 addons-598993 kubelet[1354]: I1002 21:28:59.955099    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-s6sqr\" (UniqueName: \"kubernetes.io/projected/9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3-kube-api-access-s6sqr\") on node \"addons-598993\" DevicePath \"\""
	Oct 02 21:29:00 addons-598993 kubelet[1354]: I1002 21:29:00.236730    1354 scope.go:117] "RemoveContainer" containerID="f3a9aa7e722bf5b5e584cca47718ee08a8b67e9115a3169d8ac790ecfef5380a"
	Oct 02 21:29:01 addons-598993 kubelet[1354]: I1002 21:29:01.424990    1354 scope.go:117] "RemoveContainer" containerID="c0a006d791ca681c20a94eaa75fbbd4283287c2e1ac60818e4711ff36fd3f02f"
	Oct 02 21:29:01 addons-598993 kubelet[1354]: I1002 21:29:01.427804    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7f5f4eed-e1e8-4881-b632-7aafd637b842" path="/var/lib/kubelet/pods/7f5f4eed-e1e8-4881-b632-7aafd637b842/volumes"
	Oct 02 21:29:01 addons-598993 kubelet[1354]: I1002 21:29:01.428186    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="994d0400-74e9-4d84-adba-6d763733546d" path="/var/lib/kubelet/pods/994d0400-74e9-4d84-adba-6d763733546d/volumes"
	Oct 02 21:29:01 addons-598993 kubelet[1354]: I1002 21:29:01.428535    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3" path="/var/lib/kubelet/pods/9ef2a77a-1d6f-4ee1-b6e8-3ab146dbe7a3/volumes"
	Oct 02 21:29:02 addons-598993 kubelet[1354]: I1002 21:29:02.243638    1354 scope.go:117] "RemoveContainer" containerID="c0a006d791ca681c20a94eaa75fbbd4283287c2e1ac60818e4711ff36fd3f02f"
	Oct 02 21:29:02 addons-598993 kubelet[1354]: I1002 21:29:02.243867    1354 scope.go:117] "RemoveContainer" containerID="f2a5aad62aec30d9c23593630e82219a113538c4ab189c7843195f5b60961e6c"
	Oct 02 21:29:02 addons-598993 kubelet[1354]: E1002 21:29:02.244139    1354 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-9hfzm_default(258b5660-d274-4316-b2ff-ecf015008819)\"" pod="default/hello-world-app-5d77478584-9hfzm" podUID="258b5660-d274-4316-b2ff-ecf015008819"
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.250418    1354 scope.go:117] "RemoveContainer" containerID="d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91"
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.271932    1354 scope.go:117] "RemoveContainer" containerID="d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91"
	Oct 02 21:29:04 addons-598993 kubelet[1354]: E1002 21:29:04.272366    1354 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91\": container with ID starting with d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91 not found: ID does not exist" containerID="d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91"
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.272414    1354 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91"} err="failed to get container status \"d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91\": rpc error: code = NotFound desc = could not find container \"d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91\": container with ID starting with d6945cb55c962cc8317ac115e6753765b8cf0c9d04b7659c20198811d3c96d91 not found: ID does not exist"
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.294792    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10f44c59-5e95-4757-8f50-0a2240669870-webhook-cert\") pod \"10f44c59-5e95-4757-8f50-0a2240669870\" (UID: \"10f44c59-5e95-4757-8f50-0a2240669870\") "
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.294854    1354 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhkw8\" (UniqueName: \"kubernetes.io/projected/10f44c59-5e95-4757-8f50-0a2240669870-kube-api-access-lhkw8\") pod \"10f44c59-5e95-4757-8f50-0a2240669870\" (UID: \"10f44c59-5e95-4757-8f50-0a2240669870\") "
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.297173    1354 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f44c59-5e95-4757-8f50-0a2240669870-kube-api-access-lhkw8" (OuterVolumeSpecName: "kube-api-access-lhkw8") pod "10f44c59-5e95-4757-8f50-0a2240669870" (UID: "10f44c59-5e95-4757-8f50-0a2240669870"). InnerVolumeSpecName "kube-api-access-lhkw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.301417    1354 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f44c59-5e95-4757-8f50-0a2240669870-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "10f44c59-5e95-4757-8f50-0a2240669870" (UID: "10f44c59-5e95-4757-8f50-0a2240669870"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.395429    1354 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10f44c59-5e95-4757-8f50-0a2240669870-webhook-cert\") on node \"addons-598993\" DevicePath \"\""
	Oct 02 21:29:04 addons-598993 kubelet[1354]: I1002 21:29:04.395474    1354 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lhkw8\" (UniqueName: \"kubernetes.io/projected/10f44c59-5e95-4757-8f50-0a2240669870-kube-api-access-lhkw8\") on node \"addons-598993\" DevicePath \"\""
	Oct 02 21:29:05 addons-598993 kubelet[1354]: I1002 21:29:05.425905    1354 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="10f44c59-5e95-4757-8f50-0a2240669870" path="/var/lib/kubelet/pods/10f44c59-5e95-4757-8f50-0a2240669870/volumes"
	
	* 
	* ==> storage-provisioner [b8876fc5b7043d6fa2722be225556f7e9a964c6d1c084348d5dac118cd7c28c7] <==
	* I1002 21:24:44.673467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:24:44.746964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:24:44.747087       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:24:44.825100       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:24:44.826297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-598993_1c26d121-2cd5-4932-ad80-76185cee88b0!
	I1002 21:24:44.834589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bd9eafc-75cb-4275-9ceb-3cec6bce08a6", APIVersion:"v1", ResourceVersion:"811", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-598993_1c26d121-2cd5-4932-ad80-76185cee88b0 became leader
	I1002 21:24:44.926493       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-598993_1c26d121-2cd5-4932-ad80-76185cee88b0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-598993 -n addons-598993
helpers_test.go:261: (dbg) Run:  kubectl --context addons-598993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (171.22s)
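A manual re-run of the two probes that failed above, assuming the addons-598993 profile is still running (curl's exit status 28, surfaced through ssh, indicates the request timed out); a sketch using only the commands already shown in the log:

	# curl through the ingress controller from inside the node (addons_test.go:240)
	out/minikube-linux-arm64 -p addons-598993 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# query the ingress-dns record directly against the node IP (addons_test.go:275)
	nslookup hello-john.test $(out/minikube-linux-arm64 -p addons-598993 ip)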

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (15.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (625.2448ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.117155ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.26095ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (495.020555ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.423238ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2023/10/02 21:34:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.80178ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.608516ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 15.151658057s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (361.563439ms)

                                                
                                                
-- stdout --
	total 8
	drwxr-xr-x 2 root root 4096 Oct  2 21:33 .
	drwxr-xr-x 1 root root 4096 Oct  2 21:33 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-arm64 -p functional-277432 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "sudo umount -f /mount-9p": exit status 1 (368.016532ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-277432 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46464
* Userspace file server: ufs starting
* Userspace file server is shutdown

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I1002 21:34:01.774789 1073309 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:01.775550 1073309 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:01.775563 1073309 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:01.775569 1073309 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:01.775886 1073309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:01.776243 1073309 mustload.go:65] Loading cluster: functional-277432
I1002 21:34:01.776666 1073309 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:01.777186 1073309 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:01.818350 1073309 host.go:66] Checking if "functional-277432" exists ...
I1002 21:34:01.818684 1073309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 21:34:02.031563 1073309 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 21:34:02.017958571 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
I1002 21:34:02.031730 1073309 cli_runner.go:164] Run: docker network inspect functional-277432 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:34:02.084492 1073309 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001 into VM as /mount-9p ...
I1002 21:34:02.086645 1073309 out.go:177]   - Mount type:   9p
I1002 21:34:02.088553 1073309 out.go:177]   - User ID:      docker
I1002 21:34:02.090368 1073309 out.go:177]   - Group ID:     docker
I1002 21:34:02.092273 1073309 out.go:177]   - Version:      9p2000.L
I1002 21:34:02.094446 1073309 out.go:177]   - Message Size: 262144
I1002 21:34:02.096622 1073309 out.go:177]   - Options:      map[]
I1002 21:34:02.098777 1073309 out.go:177]   - Bind Address: 192.168.49.1:46464
I1002 21:34:02.100950 1073309 out.go:177] * Userspace file server: 
I1002 21:34:02.101253 1073309 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I1002 21:34:02.103269 1073309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:02.103584 1073309 main.go:125] stdlog: ufs.go:27 listen tcp 192.168.49.1:46464: bind: address already in use
I1002 21:34:02.105607 1073309 out.go:177] * Userspace file server is shutdown
I1002 21:34:02.149732 1073309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:02.256229 1073309 mount.go:180] unmount for /mount-9p ran successfully
I1002 21:34:02.256256 1073309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1002 21:34:02.280442 1073309 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1002 21:34:02.308096 1073309 out.go:177] 
W1002 21:34:02.310238 1073309 out.go:239] X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount(2) system call failed: Connection refused.

                                                
                                                
X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount(2) system call failed: Connection refused.

                                                
                                                
W1002 21:34:02.310262 1073309 out.go:239] * 
* 
W1002 21:34:02.318369 1073309 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_a47fdde85b93d52bc79d06f639033e80169e190e_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_a47fdde85b93d52bc79d06f639033e80169e190e_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 21:34:02.321258 1073309 out.go:177] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (15.99s)
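The stderr above points at the root cause: the userspace 9p server could not bind 192.168.49.1:46464 ("bind: address already in use"), so it shut down and the in-VM mount(2) was refused. A minimal sketch of checking and retrying on the CI host, assuming ss from iproute2 is available; the alternate port 46465 is only an example, not part of this run:

	# find what is still listening on the 9p bind port from the failed attempt
	ss -ltnp | grep ':46464'
	# retry the mount on a free port, then re-check it from inside the node as the test does
	out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdspecific-port906767588/001:/mount-9p --alsologtostderr -v=1 --port 46465
	out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"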

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (184.16s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-420597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-420597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.18373165s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-420597 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-420597 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3db2903c-79ca-4c32-aff2-15b0a88df7b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3db2903c-79ca-4c32-aff2-15b0a88df7b6] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.010576853s
addons_test.go:240: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1002 21:38:26.869137 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:26.874430 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:26.884706 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:26.905004 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:26.945278 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:27.025647 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:27.186045 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:27.506598 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:28.147666 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:29.428371 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:31.989921 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:37.110656 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:38:47.351834 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-420597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.708760215s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-420597 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1002 21:39:07.833040 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
addons_test.go:275: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.023163553s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:277: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:281: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:284: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons disable ingress-dns --alsologtostderr -v=1: (2.037325917s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons disable ingress --alsologtostderr -v=1: (7.568601798s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-420597
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-420597:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26",
	        "Created": "2023-10-02T21:34:58.468151984Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1076679,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T21:34:58.798201463Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26/hostname",
	        "HostsPath": "/var/lib/docker/containers/1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26/hosts",
	        "LogPath": "/var/lib/docker/containers/1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26/1931ed7a0a98b91f2a2625210afc49c74f6f3c9a67c426e52d40f959e4cf4a26-json.log",
	        "Name": "/ingress-addon-legacy-420597",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-420597:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-420597",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83de20e4adbdc65ff77abbf437cab24ee38b68fcbd449a5f317fe5f16d6b3a53-init/diff:/var/lib/docker/overlay2/211b77e87812a1edc3010e11f8a4e888a425a4aebe773b65e967cb7beecedbef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83de20e4adbdc65ff77abbf437cab24ee38b68fcbd449a5f317fe5f16d6b3a53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83de20e4adbdc65ff77abbf437cab24ee38b68fcbd449a5f317fe5f16d6b3a53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83de20e4adbdc65ff77abbf437cab24ee38b68fcbd449a5f317fe5f16d6b3a53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-420597",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-420597/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-420597",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-420597",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-420597",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bad487ca397d18867ae106a4c050a856843995babf319540adda5cd44c6117e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2bad487ca397",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-420597": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1931ed7a0a98",
	                        "ingress-addon-legacy-420597"
	                    ],
	                    "NetworkID": "b864011f0e6994155a2b0520220d95b6a3b5a7139ad3a0e3691485e223bec6ac",
	                    "EndpointID": "2439107ff71b8fce1c0cd4bf9766125a25d64ce91b4bb67bcc2bfe4192cb6e30",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-420597 -n ingress-addon-legacy-420597
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-420597 logs -n 25: (1.4620013s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-277432 image ls                                             | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	| image          | functional-277432 image save                                           | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-277432               |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432 image rm                                             | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-277432               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432 image ls                                             | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	| image          | functional-277432 image load                                           | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432 image ls                                             | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	| image          | functional-277432 image save --daemon                                  | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-277432               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-277432 ssh pgrep                                            | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-277432                                                      | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-277432 image build -t                                       | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	|                | localhost/my-image:functional-277432                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-277432 image ls                                             | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	| delete         | -p functional-277432                                                   | functional-277432           | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:34 UTC |
	| start          | -p ingress-addon-legacy-420597                                         | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:34 UTC | 02 Oct 23 21:36 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-420597                                            | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:36 UTC | 02 Oct 23 21:36 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-420597                                            | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:36 UTC | 02 Oct 23 21:36 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-420597                                            | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:36 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-420597 ip                                         | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:38 UTC | 02 Oct 23 21:38 UTC |
	| addons         | ingress-addon-legacy-420597                                            | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:39 UTC | 02 Oct 23 21:39 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-420597                                            | ingress-addon-legacy-420597 | jenkins | v1.31.2 | 02 Oct 23 21:39 UTC | 02 Oct 23 21:39 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 21:34:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:34:37.444510 1076218 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:34:37.444685 1076218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:34:37.444693 1076218 out.go:309] Setting ErrFile to fd 2...
	I1002 21:34:37.444699 1076218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:34:37.444947 1076218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:34:37.445417 1076218 out.go:303] Setting JSON to false
	I1002 21:34:37.446402 1076218 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15425,"bootTime":1696267053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:34:37.446474 1076218 start.go:138] virtualization:  
	I1002 21:34:37.449586 1076218 out.go:177] * [ingress-addon-legacy-420597] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:34:37.452787 1076218 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:34:37.454949 1076218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:34:37.452938 1076218 notify.go:220] Checking for updates...
	I1002 21:34:37.459063 1076218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:34:37.461262 1076218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:34:37.463140 1076218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:34:37.465665 1076218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:34:37.468072 1076218 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:34:37.496682 1076218 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:34:37.496794 1076218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:34:37.580601 1076218 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-02 21:34:37.57064473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:34:37.580709 1076218 docker.go:294] overlay module found
	I1002 21:34:37.583294 1076218 out.go:177] * Using the docker driver based on user configuration
	I1002 21:34:37.585390 1076218 start.go:298] selected driver: docker
	I1002 21:34:37.585411 1076218 start.go:902] validating driver "docker" against <nil>
	I1002 21:34:37.585425 1076218 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:34:37.586073 1076218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:34:37.651085 1076218 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 21:34:37.641880077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:34:37.651248 1076218 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 21:34:37.651471 1076218 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:34:37.653540 1076218 out.go:177] * Using Docker driver with root privileges
	I1002 21:34:37.656013 1076218 cni.go:84] Creating CNI manager for ""
	I1002 21:34:37.656036 1076218 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:34:37.656047 1076218 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:34:37.656060 1076218 start_flags.go:321] config:
	{Name:ingress-addon-legacy-420597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-420597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:34:37.658924 1076218 out.go:177] * Starting control plane node ingress-addon-legacy-420597 in cluster ingress-addon-legacy-420597
	I1002 21:34:37.661527 1076218 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:34:37.663597 1076218 out.go:177] * Pulling base image ...
	I1002 21:34:37.666453 1076218 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 21:34:37.666526 1076218 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:34:37.683959 1076218 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 21:34:37.683981 1076218 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 21:34:37.738048 1076218 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1002 21:34:37.738093 1076218 cache.go:57] Caching tarball of preloaded images
	I1002 21:34:37.738247 1076218 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 21:34:37.741872 1076218 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1002 21:34:37.743926 1076218 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:34:37.855627 1076218 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1002 21:34:50.525563 1076218 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:34:50.525672 1076218 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:34:51.709531 1076218 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1002 21:34:51.709926 1076218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/config.json ...
	I1002 21:34:51.709962 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/config.json: {Name:mka5cbc0e71033a768bb0408bc66159c7fb66cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:34:51.710144 1076218 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:34:51.710208 1076218 start.go:365] acquiring machines lock for ingress-addon-legacy-420597: {Name:mk33a4d3196af6c87de7327e59e196f3f43c4410 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:34:51.710272 1076218 start.go:369] acquired machines lock for "ingress-addon-legacy-420597" in 48.304µs
	I1002 21:34:51.710297 1076218 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-420597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-420597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:34:51.710372 1076218 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:34:51.713005 1076218 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 21:34:51.713378 1076218 start.go:159] libmachine.API.Create for "ingress-addon-legacy-420597" (driver="docker")
	I1002 21:34:51.713414 1076218 client.go:168] LocalClient.Create starting
	I1002 21:34:51.713494 1076218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem
	I1002 21:34:51.713533 1076218 main.go:141] libmachine: Decoding PEM data...
	I1002 21:34:51.713552 1076218 main.go:141] libmachine: Parsing certificate...
	I1002 21:34:51.713616 1076218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem
	I1002 21:34:51.713639 1076218 main.go:141] libmachine: Decoding PEM data...
	I1002 21:34:51.713665 1076218 main.go:141] libmachine: Parsing certificate...
	I1002 21:34:51.714124 1076218 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-420597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:34:51.731071 1076218 cli_runner.go:211] docker network inspect ingress-addon-legacy-420597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:34:51.731160 1076218 network_create.go:281] running [docker network inspect ingress-addon-legacy-420597] to gather additional debugging logs...
	I1002 21:34:51.731182 1076218 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-420597
	W1002 21:34:51.748031 1076218 cli_runner.go:211] docker network inspect ingress-addon-legacy-420597 returned with exit code 1
	I1002 21:34:51.748059 1076218 network_create.go:284] error running [docker network inspect ingress-addon-legacy-420597]: docker network inspect ingress-addon-legacy-420597: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-420597 not found
	I1002 21:34:51.748075 1076218 network_create.go:286] output of [docker network inspect ingress-addon-legacy-420597]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-420597 not found
	
	** /stderr **
	I1002 21:34:51.748206 1076218 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:34:51.767026 1076218 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001031700}
	I1002 21:34:51.767066 1076218 network_create.go:124] attempt to create docker network ingress-addon-legacy-420597 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:34:51.767127 1076218 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-420597 ingress-addon-legacy-420597
	I1002 21:34:51.840946 1076218 network_create.go:108] docker network ingress-addon-legacy-420597 192.168.49.0/24 created
	I1002 21:34:51.840981 1076218 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-420597" container
	I1002 21:34:51.841078 1076218 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:34:51.861791 1076218 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-420597 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-420597 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:34:51.880431 1076218 oci.go:103] Successfully created a docker volume ingress-addon-legacy-420597
	I1002 21:34:51.880520 1076218 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-420597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-420597 --entrypoint /usr/bin/test -v ingress-addon-legacy-420597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 21:34:53.396368 1076218 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-420597-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-420597 --entrypoint /usr/bin/test -v ingress-addon-legacy-420597:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.515805383s)
	I1002 21:34:53.396399 1076218 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-420597
	I1002 21:34:53.396412 1076218 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 21:34:53.396431 1076218 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 21:34:53.396519 1076218 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-420597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:34:58.366327 1076218 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-420597:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.969760302s)
	I1002 21:34:58.366361 1076218 kic.go:199] duration metric: took 4.969923 seconds to extract preloaded images to volume
	W1002 21:34:58.366520 1076218 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:34:58.366652 1076218 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:34:58.452330 1076218 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-420597 --name ingress-addon-legacy-420597 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-420597 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-420597 --network ingress-addon-legacy-420597 --ip 192.168.49.2 --volume ingress-addon-legacy-420597:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 21:34:58.807179 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Running}}
	I1002 21:34:58.829350 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:34:58.865804 1076218 cli_runner.go:164] Run: docker exec ingress-addon-legacy-420597 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:34:58.969657 1076218 oci.go:144] the created container "ingress-addon-legacy-420597" has a running status.
	I1002 21:34:58.969705 1076218 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa...
	I1002 21:34:59.825446 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:34:59.825538 1076218 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:34:59.848631 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:34:59.871880 1076218 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:34:59.871899 1076218 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-420597 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:34:59.949440 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:34:59.974809 1076218 machine.go:88] provisioning docker machine ...
	I1002 21:34:59.974837 1076218 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-420597"
	I1002 21:34:59.974904 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:00.011572 1076218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:35:00.012042 1076218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1002 21:35:00.012065 1076218 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-420597 && echo "ingress-addon-legacy-420597" | sudo tee /etc/hostname
	I1002 21:35:00.244424 1076218 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-420597
	
	I1002 21:35:00.244513 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:00.277425 1076218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:35:00.282015 1076218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1002 21:35:00.282062 1076218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-420597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-420597/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-420597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:35:00.427315 1076218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:35:00.427346 1076218 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 21:35:00.427387 1076218 ubuntu.go:177] setting up certificates
	I1002 21:35:00.427396 1076218 provision.go:83] configureAuth start
	I1002 21:35:00.427470 1076218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-420597
	I1002 21:35:00.451495 1076218 provision.go:138] copyHostCerts
	I1002 21:35:00.451546 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:35:00.451583 1076218 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 21:35:00.451595 1076218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:35:00.451682 1076218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 21:35:00.451770 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:35:00.451794 1076218 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 21:35:00.451803 1076218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:35:00.451835 1076218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 21:35:00.451885 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:35:00.451907 1076218 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 21:35:00.451914 1076218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:35:00.451948 1076218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 21:35:00.451998 1076218 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-420597 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-420597]
	I1002 21:35:00.590488 1076218 provision.go:172] copyRemoteCerts
	I1002 21:35:00.590564 1076218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:35:00.590608 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:00.609714 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:00.708269 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:35:00.708329 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:35:00.737097 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:35:00.737159 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 21:35:00.767000 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:35:00.767082 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:35:00.795992 1076218 provision.go:86] duration metric: configureAuth took 368.57592ms
	I1002 21:35:00.796018 1076218 ubuntu.go:193] setting minikube options for container-runtime
	I1002 21:35:00.796211 1076218 config.go:182] Loaded profile config "ingress-addon-legacy-420597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 21:35:00.796322 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:00.814560 1076218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:35:00.815008 1076218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1002 21:35:00.815040 1076218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:35:01.102669 1076218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:35:01.102693 1076218 machine.go:91] provisioned docker machine in 1.127866374s
	I1002 21:35:01.102705 1076218 client.go:171] LocalClient.Create took 9.389280249s
	I1002 21:35:01.102723 1076218 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-420597" took 9.389345455s
	I1002 21:35:01.102731 1076218 start.go:300] post-start starting for "ingress-addon-legacy-420597" (driver="docker")
	I1002 21:35:01.102740 1076218 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:35:01.102827 1076218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:35:01.102873 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:01.121940 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:01.225323 1076218 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:35:01.229794 1076218 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:35:01.229837 1076218 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 21:35:01.229849 1076218 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 21:35:01.229857 1076218 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 21:35:01.229872 1076218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 21:35:01.229941 1076218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 21:35:01.230028 1076218 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 21:35:01.230038 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /etc/ssl/certs/10477322.pem
	I1002 21:35:01.230152 1076218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:35:01.241109 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:35:01.272393 1076218 start.go:303] post-start completed in 169.646923ms
	I1002 21:35:01.272818 1076218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-420597
	I1002 21:35:01.295517 1076218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/config.json ...
	I1002 21:35:01.295855 1076218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:35:01.295910 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:01.314379 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:01.412860 1076218 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:35:01.419404 1076218 start.go:128] duration metric: createHost completed in 9.709013433s
	I1002 21:35:01.419432 1076218 start.go:83] releasing machines lock for "ingress-addon-legacy-420597", held for 9.709146748s
	I1002 21:35:01.419525 1076218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-420597
	I1002 21:35:01.439631 1076218 ssh_runner.go:195] Run: cat /version.json
	I1002 21:35:01.439758 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:01.439643 1076218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:35:01.439846 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:01.464251 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:01.467641 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:01.697861 1076218 ssh_runner.go:195] Run: systemctl --version
	I1002 21:35:01.704238 1076218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:35:01.853085 1076218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 21:35:01.859177 1076218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:35:01.885918 1076218 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 21:35:01.886000 1076218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:35:01.929404 1076218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 21:35:01.929431 1076218 start.go:469] detecting cgroup driver to use...
	I1002 21:35:01.929463 1076218 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 21:35:01.929529 1076218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:35:01.951000 1076218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:35:01.966508 1076218 docker.go:197] disabling cri-docker service (if available) ...
	I1002 21:35:01.966576 1076218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:35:01.983840 1076218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:35:02.004071 1076218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:35:02.114423 1076218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:35:02.213724 1076218 docker.go:213] disabling docker service ...
	I1002 21:35:02.213836 1076218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:35:02.236029 1076218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:35:02.251463 1076218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:35:02.352835 1076218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:35:02.462367 1076218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:35:02.476615 1076218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:35:02.496913 1076218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 21:35:02.496980 1076218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:35:02.509153 1076218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:35:02.509363 1076218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:35:02.521865 1076218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:35:02.534755 1076218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
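	The three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and place conmon in the pod cgroup. A quick way to confirm the resulting drop-in (a sketch; expected values shown as comments, assuming nothing else rewrites the file afterwards):
	# Inspect the keys the sed commands just rewrote in /etc/crio/crio.conf.d/02-crio.conf
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"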
	I1002 21:35:02.547708 1076218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:35:02.559248 1076218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:35:02.569843 1076218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:35:02.580072 1076218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:35:02.681871 1076218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:35:02.816410 1076218 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:35:02.816528 1076218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:35:02.821306 1076218 start.go:537] Will wait 60s for crictl version
	I1002 21:35:02.821368 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:02.825755 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:35:02.870917 1076218 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 21:35:02.871011 1076218 ssh_runner.go:195] Run: crio --version
	I1002 21:35:02.919781 1076218 ssh_runner.go:195] Run: crio --version
	I1002 21:35:02.969321 1076218 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1002 21:35:02.971676 1076218 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-420597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:35:02.992082 1076218 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:35:02.997063 1076218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:35:03.013638 1076218 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1002 21:35:03.013714 1076218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:35:03.068744 1076218 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 21:35:03.068826 1076218 ssh_runner.go:195] Run: which lz4
	I1002 21:35:03.073673 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1002 21:35:03.073771 1076218 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1002 21:35:03.078549 1076218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 21:35:03.078586 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1002 21:35:05.424995 1076218 crio.go:444] Took 2.351248 seconds to copy over tarball
	I1002 21:35:05.425094 1076218 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1002 21:35:08.264085 1076218 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.838958893s)
	I1002 21:35:08.264117 1076218 crio.go:451] Took 2.839089 seconds to extract the tarball
	I1002 21:35:08.264127 1076218 ssh_runner.go:146] rm: /preloaded.tar.lz4
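	The preload handling above is a check, copy, extract, clean-up sequence: stat the target, copy the cached tarball over SSH when it is absent, unpack it under /var, then delete it. A rough shell equivalent (illustrative only; minikube drives the copy through its internal ssh_runner, and copy_preload_tarball below is a hypothetical stand-in for that step):
	# Sketch of the preload flow logged above
	if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
	  copy_preload_tarball /preloaded.tar.lz4    # stand-in for minikube's scp over ssh_runner
	fi
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  # unpack cached image layers under /var
	rm -f /preloaded.tar.lz4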
	I1002 21:35:08.351649 1076218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:35:08.396392 1076218 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1002 21:35:08.396420 1076218 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 21:35:08.396497 1076218 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 21:35:08.396510 1076218 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:08.396688 1076218 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 21:35:08.396692 1076218 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 21:35:08.396756 1076218 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 21:35:08.396760 1076218 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 21:35:08.396833 1076218 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 21:35:08.396853 1076218 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1002 21:35:08.397826 1076218 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:08.398247 1076218 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 21:35:08.398289 1076218 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 21:35:08.398335 1076218 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 21:35:08.398495 1076218 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 21:35:08.398536 1076218 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1002 21:35:08.398495 1076218 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 21:35:08.400147 1076218 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 21:35:08.839157 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1002 21:35:08.852948 1076218 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.853241 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1002 21:35:08.876405 1076218 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.876664 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1002 21:35:08.884388 1076218 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.884607 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1002 21:35:08.898531 1076218 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.898771 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1002 21:35:08.913612 1076218 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.913825 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1002 21:35:08.922384 1076218 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1002 21:35:08.922496 1076218 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1002 21:35:08.922599 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:08.940027 1076218 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1002 21:35:08.940115 1076218 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1002 21:35:08.940189 1076218 ssh_runner.go:195] Run: which crictl
	W1002 21:35:08.941484 1076218 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.941800 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1002 21:35:08.987873 1076218 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1002 21:35:08.988041 1076218 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:09.043790 1076218 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1002 21:35:09.043831 1076218 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 21:35:09.043881 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.082634 1076218 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1002 21:35:09.082755 1076218 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1002 21:35:09.082833 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.087815 1076218 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1002 21:35:09.087917 1076218 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 21:35:09.088001 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.098103 1076218 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1002 21:35:09.098199 1076218 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1002 21:35:09.098277 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.098405 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1002 21:35:09.098505 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1002 21:35:09.098640 1076218 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1002 21:35:09.098688 1076218 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1002 21:35:09.098741 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.261001 1076218 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1002 21:35:09.261047 1076218 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:09.261167 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1002 21:35:09.261290 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1002 21:35:09.261359 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1002 21:35:09.261445 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1002 21:35:09.261499 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1002 21:35:09.261564 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1002 21:35:09.261607 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1002 21:35:09.261647 1076218 ssh_runner.go:195] Run: which crictl
	I1002 21:35:09.381104 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1002 21:35:09.395233 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1002 21:35:09.395315 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1002 21:35:09.395324 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1002 21:35:09.395366 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1002 21:35:09.395414 1076218 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:09.456346 1076218 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 21:35:09.456464 1076218 cache_images.go:92] LoadImages completed in 1.060032353s
	W1002 21:35:09.456569 1076218 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
	I1002 21:35:09.456670 1076218 ssh_runner.go:195] Run: crio config
	I1002 21:35:09.511098 1076218 cni.go:84] Creating CNI manager for ""
	I1002 21:35:09.511120 1076218 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:35:09.511170 1076218 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 21:35:09.511197 1076218 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-420597 NodeName:ingress-addon-legacy-420597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 21:35:09.511360 1076218 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-420597"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:35:09.511475 1076218 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-420597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-420597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
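	The empty ExecStart= followed by a second, populated ExecStart= in the drop-in above is the usual systemd idiom: the blank assignment clears any ExecStart inherited from the base kubelet.service, so only the command defined in this drop-in runs. Once the drop-in has been copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp a few lines below), the effective command can be checked with standard systemd tooling (a sketch):
	# Show the cleared and the effective ExecStart that systemd will use for kubelet
	systemctl cat kubelet.service | grep -A1 '^ExecStart=$'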
	I1002 21:35:09.511547 1076218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1002 21:35:09.522606 1076218 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:35:09.522685 1076218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:35:09.533195 1076218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1002 21:35:09.554394 1076218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1002 21:35:09.575538 1076218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1002 21:35:09.596594 1076218 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:35:09.601050 1076218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:35:09.614725 1076218 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597 for IP: 192.168.49.2
	I1002 21:35:09.614814 1076218 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:09.615003 1076218 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 21:35:09.615077 1076218 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 21:35:09.615146 1076218 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key
	I1002 21:35:09.615162 1076218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt with IP's: []
	I1002 21:35:09.955660 1076218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt ...
	I1002 21:35:09.955692 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: {Name:mkaa08acff8839a9dd94ff7813182c46f72b0a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:09.955898 1076218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key ...
	I1002 21:35:09.955920 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key: {Name:mkdfc6d621985dc65ad9879af6bf20aa750ac07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:09.956014 1076218 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key.dd3b5fb2
	I1002 21:35:09.956033 1076218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 21:35:10.416990 1076218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt.dd3b5fb2 ...
	I1002 21:35:10.417021 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt.dd3b5fb2: {Name:mk51a8401764293bfd077ebe3c6604ddbde45624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:10.417223 1076218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key.dd3b5fb2 ...
	I1002 21:35:10.417247 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key.dd3b5fb2: {Name:mk67bca6b8b5bddd0a71ccf41d42814625834402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:10.417342 1076218 certs.go:337] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt
	I1002 21:35:10.417425 1076218 certs.go:341] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key
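	The apiserver certificate generated above is signed for the node IP, the in-cluster service VIP and loopback, plus the certSANs from the kubeadm config. Once it lands on the node as /var/lib/minikube/certs/apiserver.crt (the scp further down), the SANs can be inspected with plain openssl (a sketch):
	# List the Subject Alternative Names baked into the apiserver certificate
	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'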
	I1002 21:35:10.417481 1076218 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.key
	I1002 21:35:10.417499 1076218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.crt with IP's: []
	I1002 21:35:10.693799 1076218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.crt ...
	I1002 21:35:10.693829 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.crt: {Name:mka8245e3148de9399e61137ac359781a0c736d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:10.694004 1076218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.key ...
	I1002 21:35:10.694012 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.key: {Name:mkf97b9db246ece47127c0b8bd29acfd632a1ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:10.694077 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:35:10.694092 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:35:10.694108 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:35:10.694131 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:35:10.694150 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:35:10.694165 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:35:10.694179 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:35:10.694193 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:35:10.694244 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 21:35:10.694282 1076218 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 21:35:10.694296 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:35:10.694322 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:35:10.694352 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:35:10.694381 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 21:35:10.694436 1076218 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:35:10.694470 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /usr/share/ca-certificates/10477322.pem
	I1002 21:35:10.694490 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:35:10.694512 1076218 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem -> /usr/share/ca-certificates/1047732.pem
	I1002 21:35:10.695092 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 21:35:10.724028 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:35:10.752785 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:35:10.781663 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:35:10.810717 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:35:10.839281 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:35:10.868441 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:35:10.897181 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:35:10.926497 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 21:35:10.955286 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:35:10.984295 1076218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 21:35:11.014590 1076218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:35:11.036360 1076218 ssh_runner.go:195] Run: openssl version
	I1002 21:35:11.043609 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 21:35:11.055882 1076218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 21:35:11.060984 1076218 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:35:11.061057 1076218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 21:35:11.069995 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
	I1002 21:35:11.082628 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 21:35:11.095066 1076218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 21:35:11.100251 1076218 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:35:11.100347 1076218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 21:35:11.109322 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:35:11.121680 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:35:11.133699 1076218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:35:11.138472 1076218 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:35:11.138539 1076218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:35:11.147275 1076218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
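	Each of the three certificate blocks above follows the same pattern: expose the PEM under /usr/share/ca-certificates, link it into /etc/ssl/certs, compute its OpenSSL subject hash, and add a <hash>.0 symlink so TLS clients can find it. A minimal sketch of that pattern for the minikube CA (b5213941 is the hash logged for this run):
	# Two-step symlink pattern used for 1047732.pem, 10477322.pem and minikubeCA.pem
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"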
	I1002 21:35:11.159304 1076218 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 21:35:11.164012 1076218 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:35:11.164065 1076218 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-420597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-420597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:35:11.164140 1076218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:35:11.164201 1076218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:35:11.208736 1076218 cri.go:89] found id: ""
	I1002 21:35:11.208852 1076218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:35:11.219800 1076218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:35:11.230499 1076218 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:35:11.230574 1076218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:35:11.241387 1076218 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:35:11.241428 1076218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:35:11.297409 1076218 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1002 21:35:11.297641 1076218 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 21:35:11.354870 1076218 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:35:11.354942 1076218 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 21:35:11.354979 1076218 kubeadm.go:322] OS: Linux
	I1002 21:35:11.355038 1076218 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 21:35:11.355088 1076218 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 21:35:11.355136 1076218 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 21:35:11.355183 1076218 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 21:35:11.355232 1076218 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 21:35:11.355286 1076218 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 21:35:11.450932 1076218 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:35:11.451111 1076218 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:35:11.451242 1076218 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1002 21:35:11.700931 1076218 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:35:11.702473 1076218 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:35:11.702777 1076218 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 21:35:11.806685 1076218 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:35:11.812167 1076218 out.go:204]   - Generating certificates and keys ...
	I1002 21:35:11.812347 1076218 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 21:35:11.812457 1076218 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 21:35:12.955128 1076218 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:35:13.287783 1076218 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:35:13.920522 1076218 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:35:14.222669 1076218 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 21:35:15.112797 1076218 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 21:35:15.112969 1076218 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-420597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:35:15.436517 1076218 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 21:35:15.437037 1076218 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-420597 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:35:16.060969 1076218 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:35:16.773574 1076218 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:35:17.396773 1076218 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 21:35:17.397138 1076218 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:35:18.873804 1076218 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:35:19.602541 1076218 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:35:20.424602 1076218 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:35:21.126044 1076218 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:35:21.127080 1076218 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:35:21.130105 1076218 out.go:204]   - Booting up control plane ...
	I1002 21:35:21.130232 1076218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:35:21.141083 1076218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:35:21.142534 1076218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:35:21.143472 1076218 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:35:21.146094 1076218 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 21:35:33.649533 1076218 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502060 seconds
	I1002 21:35:33.649652 1076218 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:35:33.668034 1076218 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:35:34.192785 1076218 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:35:34.192925 1076218 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-420597 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1002 21:35:34.703181 1076218 kubeadm.go:322] [bootstrap-token] Using token: wa4mpu.ng8hkmmmmy3ewzxd
	I1002 21:35:34.705413 1076218 out.go:204]   - Configuring RBAC rules ...
	I1002 21:35:34.705531 1076218 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:35:34.711625 1076218 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:35:34.725887 1076218 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:35:34.728771 1076218 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:35:34.731999 1076218 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:35:34.738953 1076218 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:35:34.749829 1076218 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:35:35.026716 1076218 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 21:35:35.166842 1076218 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 21:35:35.168244 1076218 kubeadm.go:322] 
	I1002 21:35:35.168315 1076218 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 21:35:35.168329 1076218 kubeadm.go:322] 
	I1002 21:35:35.168404 1076218 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 21:35:35.168415 1076218 kubeadm.go:322] 
	I1002 21:35:35.168439 1076218 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 21:35:35.168499 1076218 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:35:35.168552 1076218 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:35:35.168560 1076218 kubeadm.go:322] 
	I1002 21:35:35.168609 1076218 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 21:35:35.168681 1076218 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:35:35.168751 1076218 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:35:35.168759 1076218 kubeadm.go:322] 
	I1002 21:35:35.168838 1076218 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:35:35.168913 1076218 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 21:35:35.168921 1076218 kubeadm.go:322] 
	I1002 21:35:35.168999 1076218 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wa4mpu.ng8hkmmmmy3ewzxd \
	I1002 21:35:35.169100 1076218 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 \
	I1002 21:35:35.169126 1076218 kubeadm.go:322]     --control-plane 
	I1002 21:35:35.169134 1076218 kubeadm.go:322] 
	I1002 21:35:35.169246 1076218 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:35:35.169258 1076218 kubeadm.go:322] 
	I1002 21:35:35.169334 1076218 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wa4mpu.ng8hkmmmmy3ewzxd \
	I1002 21:35:35.169437 1076218 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 
	I1002 21:35:35.172750 1076218 kubeadm.go:322] W1002 21:35:11.296604    1237 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1002 21:35:35.172963 1076218 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 21:35:35.173066 1076218 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:35:35.173187 1076218 kubeadm.go:322] W1002 21:35:21.140822    1237 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 21:35:35.173346 1076218 kubeadm.go:322] W1002 21:35:21.142326    1237 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1002 21:35:35.173376 1076218 cni.go:84] Creating CNI manager for ""
	I1002 21:35:35.173391 1076218 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:35:35.175523 1076218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 21:35:35.177731 1076218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:35:35.182956 1076218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1002 21:35:35.182988 1076218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 21:35:35.207762 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:35:35.676683 1076218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:35:35.676752 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:35.676811 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86 minikube.k8s.io/name=ingress-addon-legacy-420597 minikube.k8s.io/updated_at=2023_10_02T21_35_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:35.828252 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:35.828322 1076218 ops.go:34] apiserver oom_adj: -16
	I1002 21:35:35.955668 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:36.561689 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:37.061735 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:37.561995 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:38.061685 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:38.561744 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:39.062069 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:39.561217 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:40.061418 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:40.561191 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:41.061169 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:41.561709 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:42.061947 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:42.562092 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:43.062077 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:43.561045 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:44.061174 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:44.561679 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:45.061188 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:45.561588 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:46.061245 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:46.561513 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:47.062080 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:47.561296 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:48.061761 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:48.561542 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:49.061782 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:49.562033 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:50.061632 1076218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:35:50.206448 1076218 kubeadm.go:1081] duration metric: took 14.529755125s to wait for elevateKubeSystemPrivileges.
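	The repeated "kubectl get sa default" calls above are minikube polling (roughly every 500ms, per the timestamps) until the default ServiceAccount exists; the summary line reports the total wait for elevateKubeSystemPrivileges. A rough shell equivalent, shown only as an illustration and not something the test ran as a single command:
	
	  # illustrative sketch; not executed by the recorded run
	  until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done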
	I1002 21:35:50.206477 1076218 kubeadm.go:406] StartCluster complete in 39.042415556s
	I1002 21:35:50.206493 1076218 settings.go:142] acquiring lock: {Name:mk84ed9b341869374b10cf082af1bfa542d39dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:50.206567 1076218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:35:50.207325 1076218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:35:50.208075 1076218 kapi.go:59] client config for ingress-addon-legacy-420597: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:35:50.209579 1076218 config.go:182] Loaded profile config "ingress-addon-legacy-420597": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1002 21:35:50.209642 1076218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:35:50.209826 1076218 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 21:35:50.209904 1076218 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-420597"
	I1002 21:35:50.209918 1076218 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-420597"
	I1002 21:35:50.209980 1076218 host.go:66] Checking if "ingress-addon-legacy-420597" exists ...
	I1002 21:35:50.210445 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:35:50.210823 1076218 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 21:35:50.210871 1076218 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-420597"
	I1002 21:35:50.210892 1076218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-420597"
	I1002 21:35:50.211156 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:35:50.279192 1076218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:35:50.276821 1076218 kapi.go:59] client config for ingress-addon-legacy-420597: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:35:50.281662 1076218 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:35:50.281685 1076218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:35:50.281770 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:50.281808 1076218 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-420597"
	I1002 21:35:50.281899 1076218 host.go:66] Checking if "ingress-addon-legacy-420597" exists ...
	I1002 21:35:50.282372 1076218 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-420597 --format={{.State.Status}}
	I1002 21:35:50.317665 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:50.325911 1076218 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:35:50.325932 1076218 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:35:50.326000 1076218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-420597
	I1002 21:35:50.356529 1076218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/ingress-addon-legacy-420597/id_rsa Username:docker}
	I1002 21:35:50.434811 1076218 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-420597" context rescaled to 1 replicas
	I1002 21:35:50.434900 1076218 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:35:50.437454 1076218 out.go:177] * Verifying Kubernetes components...
	I1002 21:35:50.439521 1076218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:35:50.452300 1076218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:35:50.485526 1076218 kapi.go:59] client config for ingress-addon-legacy-420597: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:35:50.486093 1076218 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-420597" to be "Ready" ...
	I1002 21:35:50.510558 1076218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:35:50.549620 1076218 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:35:51.063367 1076218 start.go:923] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
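	The ConfigMap rewrite started at 21:35:50.452300 adds a hosts block to the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1 here), which the line above confirms. One way to verify it by hand, shown only as an illustration:
	
	  # illustrative check; not part of the recorded run
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	  # expected to include:
	  #   hosts {
	  #      192.168.49.1 host.minikube.internal
	  #      fallthrough
	  #   }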
	I1002 21:35:51.167254 1076218 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:35:51.169953 1076218 addons.go:502] enable addons completed in 960.095301ms: enabled=[storage-provisioner default-storageclass]
	I1002 21:35:52.968033 1076218 node_ready.go:58] node "ingress-addon-legacy-420597" has status "Ready":"False"
	I1002 21:35:55.467313 1076218 node_ready.go:58] node "ingress-addon-legacy-420597" has status "Ready":"False"
	I1002 21:35:57.967007 1076218 node_ready.go:58] node "ingress-addon-legacy-420597" has status "Ready":"False"
	I1002 21:35:58.967213 1076218 node_ready.go:49] node "ingress-addon-legacy-420597" has status "Ready":"True"
	I1002 21:35:58.967244 1076218 node_ready.go:38] duration metric: took 8.481106015s waiting for node "ingress-addon-legacy-420597" to be "Ready" ...
	I1002 21:35:58.967256 1076218 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:35:58.974736 1076218 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-wdpfn" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:00.982819 1076218 pod_ready.go:102] pod "coredns-66bff467f8-wdpfn" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-02 21:35:50 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1002 21:36:02.986587 1076218 pod_ready.go:102] pod "coredns-66bff467f8-wdpfn" in "kube-system" namespace has status "Ready":"False"
	I1002 21:36:04.485934 1076218 pod_ready.go:92] pod "coredns-66bff467f8-wdpfn" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.485965 1076218 pod_ready.go:81] duration metric: took 5.511193896s waiting for pod "coredns-66bff467f8-wdpfn" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.485977 1076218 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.490916 1076218 pod_ready.go:92] pod "etcd-ingress-addon-legacy-420597" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.490942 1076218 pod_ready.go:81] duration metric: took 4.956705ms waiting for pod "etcd-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.490956 1076218 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.496718 1076218 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-420597" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.496756 1076218 pod_ready.go:81] duration metric: took 5.780233ms waiting for pod "kube-apiserver-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.496768 1076218 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.509301 1076218 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-420597" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.509328 1076218 pod_ready.go:81] duration metric: took 12.551676ms waiting for pod "kube-controller-manager-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.509343 1076218 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q6lx6" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.513958 1076218 pod_ready.go:92] pod "kube-proxy-q6lx6" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.513986 1076218 pod_ready.go:81] duration metric: took 4.635107ms waiting for pod "kube-proxy-q6lx6" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.514003 1076218 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.680361 1076218 request.go:629] Waited for 166.266847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-420597
	I1002 21:36:04.880395 1076218 request.go:629] Waited for 197.312926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-420597
	I1002 21:36:04.883215 1076218 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-420597" in "kube-system" namespace has status "Ready":"True"
	I1002 21:36:04.883242 1076218 pod_ready.go:81] duration metric: took 369.231056ms waiting for pod "kube-scheduler-ingress-addon-legacy-420597" in "kube-system" namespace to be "Ready" ...
	I1002 21:36:04.883255 1076218 pod_ready.go:38] duration metric: took 5.915983381s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:36:04.883271 1076218 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:36:04.883334 1076218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:36:04.896733 1076218 api_server.go:72] duration metric: took 14.461760319s to wait for apiserver process to appear ...
	I1002 21:36:04.896755 1076218 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:36:04.896772 1076218 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 21:36:04.906701 1076218 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
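	The healthz probe above is a plain HTTPS GET against the apiserver. A manual check would look roughly like the following (illustrative only; anonymous access to /healthz is usually permitted on kubeadm-style clusters, but this depends on the cluster's RBAC settings):
	
	  # illustrative; certificate verification skipped for brevity
	  curl -sk https://192.168.49.2:8443/healthz
	  # ok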
	I1002 21:36:04.907629 1076218 api_server.go:141] control plane version: v1.18.20
	I1002 21:36:04.907649 1076218 api_server.go:131] duration metric: took 10.887576ms to wait for apiserver health ...
	I1002 21:36:04.907658 1076218 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:36:05.081014 1076218 request.go:629] Waited for 173.29199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:36:05.087527 1076218 system_pods.go:59] 8 kube-system pods found
	I1002 21:36:05.087564 1076218 system_pods.go:61] "coredns-66bff467f8-wdpfn" [a48376a9-a475-4604-8838-2c9560d26448] Running
	I1002 21:36:05.087571 1076218 system_pods.go:61] "etcd-ingress-addon-legacy-420597" [7987d4d5-277c-4454-bb83-6090418f091b] Running
	I1002 21:36:05.087579 1076218 system_pods.go:61] "kindnet-66d4c" [057d8562-8f88-4b17-afb9-929854cf0700] Running
	I1002 21:36:05.087584 1076218 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-420597" [99a47cf2-fbd2-4da4-a97e-814d4948adea] Running
	I1002 21:36:05.087590 1076218 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-420597" [0c250c82-ae25-403f-95f3-f56ba8b3ce8b] Running
	I1002 21:36:05.087595 1076218 system_pods.go:61] "kube-proxy-q6lx6" [9f132c15-d58d-44ed-9328-73421a1237fe] Running
	I1002 21:36:05.087608 1076218 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-420597" [6991590f-273e-4d80-8f81-991100bb91a5] Running
	I1002 21:36:05.087621 1076218 system_pods.go:61] "storage-provisioner" [103d7e4c-8770-464d-a7a2-b5c1e18387b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:36:05.087644 1076218 system_pods.go:74] duration metric: took 179.978386ms to wait for pod list to return data ...
	I1002 21:36:05.087668 1076218 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:36:05.281087 1076218 request.go:629] Waited for 193.338715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 21:36:05.283698 1076218 default_sa.go:45] found service account: "default"
	I1002 21:36:05.283730 1076218 default_sa.go:55] duration metric: took 196.053953ms for default service account to be created ...
	I1002 21:36:05.283740 1076218 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:36:05.480127 1076218 request.go:629] Waited for 196.31645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:36:05.487098 1076218 system_pods.go:86] 8 kube-system pods found
	I1002 21:36:05.487187 1076218 system_pods.go:89] "coredns-66bff467f8-wdpfn" [a48376a9-a475-4604-8838-2c9560d26448] Running
	I1002 21:36:05.487205 1076218 system_pods.go:89] "etcd-ingress-addon-legacy-420597" [7987d4d5-277c-4454-bb83-6090418f091b] Running
	I1002 21:36:05.487213 1076218 system_pods.go:89] "kindnet-66d4c" [057d8562-8f88-4b17-afb9-929854cf0700] Running
	I1002 21:36:05.487218 1076218 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-420597" [99a47cf2-fbd2-4da4-a97e-814d4948adea] Running
	I1002 21:36:05.487224 1076218 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-420597" [0c250c82-ae25-403f-95f3-f56ba8b3ce8b] Running
	I1002 21:36:05.487237 1076218 system_pods.go:89] "kube-proxy-q6lx6" [9f132c15-d58d-44ed-9328-73421a1237fe] Running
	I1002 21:36:05.487252 1076218 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-420597" [6991590f-273e-4d80-8f81-991100bb91a5] Running
	I1002 21:36:05.487263 1076218 system_pods.go:89] "storage-provisioner" [103d7e4c-8770-464d-a7a2-b5c1e18387b5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 21:36:05.487273 1076218 system_pods.go:126] duration metric: took 203.520793ms to wait for k8s-apps to be running ...
	I1002 21:36:05.487285 1076218 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:36:05.487347 1076218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:36:05.502166 1076218 system_svc.go:56] duration metric: took 14.868926ms WaitForService to wait for kubelet.
	I1002 21:36:05.502193 1076218 kubeadm.go:581] duration metric: took 15.067227591s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 21:36:05.502214 1076218 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:36:05.680649 1076218 request.go:629] Waited for 178.305179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1002 21:36:05.685143 1076218 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:36:05.685186 1076218 node_conditions.go:123] node cpu capacity is 2
	I1002 21:36:05.685198 1076218 node_conditions.go:105] duration metric: took 182.978933ms to run NodePressure ...
	I1002 21:36:05.685224 1076218 start.go:228] waiting for startup goroutines ...
	I1002 21:36:05.685231 1076218 start.go:233] waiting for cluster config update ...
	I1002 21:36:05.685247 1076218 start.go:242] writing updated cluster config ...
	I1002 21:36:05.685556 1076218 ssh_runner.go:195] Run: rm -f paused
	I1002 21:36:05.774145 1076218 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1002 21:36:05.779627 1076218 out.go:177] 
	W1002 21:36:05.781538 1076218 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1002 21:36:05.783536 1076218 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1002 21:36:05.785529 1076218 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-420597" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.464731785Z" level=info msg="Stopped container 18f1bd6b7fd1edeb32fcfa4c4be3739e8aeabfa7ae23b55c02fb7885fc579a29: ingress-nginx/ingress-nginx-controller-7fcf777cb7-52szk/controller" id=411a0aae-9404-4b57-ad34-0f02a6d08d25 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.467916305Z" level=info msg="Stopped container 18f1bd6b7fd1edeb32fcfa4c4be3739e8aeabfa7ae23b55c02fb7885fc579a29: ingress-nginx/ingress-nginx-controller-7fcf777cb7-52szk/controller" id=c1f6d110-57ee-4da3-b4c5-3a96810c8b9a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.469594205Z" level=info msg="Stopping pod sandbox: 91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3" id=acf272b3-3a92-4e54-81e5-d89076efca9c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.474425627Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-37OZZNMYKCEKBNRL - [0:0]\n:KUBE-HP-FUNVP2IDDUSBRCJ7 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-FUNVP2IDDUSBRCJ7\n-X KUBE-HP-37OZZNMYKCEKBNRL\nCOMMIT\n"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.487474844Z" level=info msg="Stopping pod sandbox: 91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3" id=daaa0911-634e-4227-8839-d1c550893a8d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.487797108Z" level=info msg="Closing host port tcp:80"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.487832513Z" level=info msg="Closing host port tcp:443"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.489532042Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.489790117Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.490007470Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-52szk Namespace:ingress-nginx ID:91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3 UID:a20d6c2d-27b7-46bb-a4ef-2574921acf34 NetNS:/var/run/netns/3deafc3d-11d4-42dc-a2c1-192d05b66e28 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.490152281Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-52szk from CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.517365443Z" level=info msg="Stopped pod sandbox: 91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3" id=acf272b3-3a92-4e54-81e5-d89076efca9c name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.517501992Z" level=info msg="Stopped pod sandbox (already stopped): 91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3" id=daaa0911-634e-4227-8839-d1c550893a8d name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.622473318Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c0319025-878a-411d-ba5b-e5518562fbd0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.622772370Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c0319025-878a-411d-ba5b-e5518562fbd0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.624517019Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e167fe7a-25ae-43d8-9430-afee00ab72b8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.624775832Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e167fe7a-25ae-43d8-9430-afee00ab72b8 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.626348478Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-96wbq/hello-world-app" id=ace30fb8-4693-4999-a015-ad71d0809354 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.626600554Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.736688708Z" level=info msg="Created container 624558df2f99b89eef66e3b174e676d0dd808db30b15e1e4fef1bb20dff82a0d: default/hello-world-app-5f5d8b66bb-96wbq/hello-world-app" id=ace30fb8-4693-4999-a015-ad71d0809354 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.737434322Z" level=info msg="Starting container: 624558df2f99b89eef66e3b174e676d0dd808db30b15e1e4fef1bb20dff82a0d" id=970a2356-feb7-4b4e-b31c-e30c7b086afc name=/runtime.v1alpha2.RuntimeService/StartContainer
	Oct 02 21:39:16 ingress-addon-legacy-420597 conmon[3841]: conmon 624558df2f99b89eef66 <ninfo>: container 3852 exited with status 1
	Oct 02 21:39:16 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:16.766646425Z" level=info msg="Started container" PID=3852 containerID=624558df2f99b89eef66e3b174e676d0dd808db30b15e1e4fef1bb20dff82a0d description=default/hello-world-app-5f5d8b66bb-96wbq/hello-world-app id=970a2356-feb7-4b4e-b31c-e30c7b086afc name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=a9bff98c07423c7c14258f7bf4fb5429e0b34e6b73242100d4c687076cacb80b
	Oct 02 21:39:17 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:17.317561302Z" level=info msg="Removing container: 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a" id=5caa337f-fa15-4215-a47b-f525b76c62c1 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 02 21:39:17 ingress-addon-legacy-420597 crio[904]: time="2023-10-02 21:39:17.343899292Z" level=info msg="Removed container 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a: default/hello-world-app-5f5d8b66bb-96wbq/hello-world-app" id=5caa337f-fa15-4215-a47b-f525b76c62c1 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	624558df2f99b       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                   5 seconds ago       Exited              hello-world-app           2                   a9bff98c07423       hello-world-app-5f5d8b66bb-96wbq
	7e354b5062b1f       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                    2 minutes ago       Running             nginx                     0                   69454b91d0e05       nginx
	18f1bd6b7fd1e       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   91e1ea37b0b8a       ingress-nginx-controller-7fcf777cb7-52szk
	8c60201f7a875       a883f7fc35610a84d589cbb450eade9face1d1a8b2cbdafa1690cbffe68cfe88                                                   3 minutes ago       Exited              patch                     1                   74bc5ca6b73b0       ingress-nginx-admission-patch-tnfg4
	cb3becea9c876       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   b8fe501ed56d2       ingress-nginx-admission-create-pt8cq
	4039485c9c403       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   043d321a2b677       storage-provisioner
	f6f6dc9abf937       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   6e9ae1025f3cd       coredns-66bff467f8-wdpfn
	312a2a860681b       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   1d9377fa83fa6       kindnet-66d4c
	2e56f6c317bf4       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   6fde695474a6b       kube-proxy-q6lx6
	b4ec8d724ac63       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   3 minutes ago       Running             kube-controller-manager   0                   b291c35394671       kube-controller-manager-ingress-addon-legacy-420597
	09b273a5d3778       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   3 minutes ago       Running             kube-scheduler            0                   f50ada9f4f63b       kube-scheduler-ingress-addon-legacy-420597
	c5e9d535669fe       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   3 minutes ago       Running             etcd                      0                   5fcdd4c0cce08       etcd-ingress-addon-legacy-420597
	7a13bdfd5d65f       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   3 minutes ago       Running             kube-apiserver            0                   77062afa4c9b5       kube-apiserver-ingress-addon-legacy-420597
	
	* 
	* ==> coredns [f6f6dc9abf93752cbc454b9b8a113834943888e0a376acd82f48d2e0f0a73e90] <==
	* [INFO] 10.244.0.5:46559 - 20328 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028956s
	[INFO] 10.244.0.5:46559 - 40599 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00002912s
	[INFO] 10.244.0.5:46559 - 56532 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033124s
	[INFO] 10.244.0.5:46559 - 33371 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004005s
	[INFO] 10.244.0.5:46559 - 7937 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00415602s
	[INFO] 10.244.0.5:46559 - 57754 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001036623s
	[INFO] 10.244.0.5:46559 - 14571 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044915s
	[INFO] 10.244.0.5:52763 - 337 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104959s
	[INFO] 10.244.0.5:42816 - 39047 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056369s
	[INFO] 10.244.0.5:52763 - 15378 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049723s
	[INFO] 10.244.0.5:42816 - 19636 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037785s
	[INFO] 10.244.0.5:52763 - 40201 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042198s
	[INFO] 10.244.0.5:52763 - 42887 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056188s
	[INFO] 10.244.0.5:42816 - 11870 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082445s
	[INFO] 10.244.0.5:52763 - 26690 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038277s
	[INFO] 10.244.0.5:42816 - 41930 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000281796s
	[INFO] 10.244.0.5:52763 - 21151 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046802s
	[INFO] 10.244.0.5:42816 - 49254 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042347s
	[INFO] 10.244.0.5:42816 - 1774 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085062s
	[INFO] 10.244.0.5:42816 - 54100 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001098432s
	[INFO] 10.244.0.5:52763 - 19420 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001338341s
	[INFO] 10.244.0.5:52763 - 25056 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001012787s
	[INFO] 10.244.0.5:42816 - 54819 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001072217s
	[INFO] 10.244.0.5:42816 - 23140 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038334s
	[INFO] 10.244.0.5:52763 - 2239 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000096188s
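	The NXDOMAIN entries above show the client's resolver appending its search domains (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) to hello-world-app.default.svc.cluster.local before the un-suffixed name answers NOERROR; that is ordinary ndots search behaviour, not a lookup failure. A query that skips the search-path walk can be issued with a trailing dot (illustrative only; the pod name and image here are arbitrary):
	
	  # illustrative; not part of the recorded run
	  kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
	    nslookup hello-world-app.default.svc.cluster.local.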
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-420597
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-420597
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=ingress-addon-legacy-420597
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T21_35_35_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:35:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-420597
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 21:39:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 21:39:08 +0000   Mon, 02 Oct 2023 21:35:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 21:39:08 +0000   Mon, 02 Oct 2023 21:35:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 21:39:08 +0000   Mon, 02 Oct 2023 21:35:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 21:39:08 +0000   Mon, 02 Oct 2023 21:35:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-420597
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 85378a20d9ab41f4bb2cf762a57c3f60
	  System UUID:                72d23297-b403-4f3b-b0d8-5bbef48fb6fc
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace    Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------    ----                                                  ------------  ----------  ---------------  -------------  ---
	  default      hello-world-app-5f5d8b66bb-96wbq                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default      nginx                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system  coredns-66bff467f8-wdpfn                              100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m32s
	  kube-system  etcd-ingress-addon-legacy-420597                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system  kindnet-66d4c                                         100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m32s
	  kube-system  kube-apiserver-ingress-addon-legacy-420597            250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system  kube-controller-manager-ingress-addon-legacy-420597  200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system  kube-proxy-q6lx6                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system  kube-scheduler-ingress-addon-legacy-420597            100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system  storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m59s (x5 over 3m59s)  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s (x5 over 3m59s)  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s (x4 over 3m59s)  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m44s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s                  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s                  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s                  kubelet     Node ingress-addon-legacy-420597 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m30s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m24s                  kubelet     Node ingress-addon-legacy-420597 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000729] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000b7a96011
	[  +0.001048] FS-Cache: N-key=[8] '7e613b0000000000'
	[  +0.003162] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000c6b3040d
	[  +0.001031] FS-Cache: O-key=[8] '7e613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000165fee4f
	[  +0.001045] FS-Cache: N-key=[8] '7e613b0000000000'
	[Oct 2 21:34] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000092679c6a
	[  +0.001107] FS-Cache: O-key=[8] '7d613b0000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=000000007e0e0088
	[  +0.001044] FS-Cache: N-key=[8] '7d613b0000000000'
	[  +0.310553] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000e895d03e
	[  +0.001082] FS-Cache: O-key=[8] '83613b0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000734ba06c
	[  +0.001060] FS-Cache: N-key=[8] '83613b0000000000'
	[  +1.089292] 9pnet: p9_fd_create_tcp (1073420): problem connecting socket to 192.168.49.1
	
	* 
	* ==> etcd [c5e9d535669fe4058e69dfbd45214c39ddf8cc39811c707c0fd64d73fd05bf24] <==
	* 2023-10-02 21:35:25.069258 W | auth: simple token is not cryptographically signed
	2023-10-02 21:35:25.397255 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-02 21:35:25.457371 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/02 21:35:25 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-02 21:35:25.776304 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-02 21:35:25.809488 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-02 21:35:25.809671 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-02 21:35:25.809961 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/02 21:35:26 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/02 21:35:26 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/02 21:35:26 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/02 21:35:26 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/02 21:35:26 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-02 21:35:26.573755 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-02 21:35:26.665493 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-02 21:35:26.665617 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-02 21:35:26.665679 I | etcdserver: published {Name:ingress-addon-legacy-420597 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-02 21:35:26.665790 I | embed: ready to serve client requests
	2023-10-02 21:35:26.666513 I | embed: ready to serve client requests
	2023-10-02 21:35:26.701944 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 21:35:26.757317 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-02 21:35:50.766961 W | etcdserver: request "header:<ID:8128024200846562177 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bff467f8-h4b84\" mod_revision:339 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bff467f8-h4b84\" value_size:3691 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bff467f8-h4b84\" > >>" with result "size:16" took too long (221.231832ms) to execute
	2023-10-02 21:35:50.829964 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-420597\" " with result "range_response_count:1 size:6504" took too long (321.777103ms) to execute
	2023-10-02 21:35:50.870253 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-q6lx6\" " with result "range_response_count:1 size:3588" took too long (355.242462ms) to execute
	2023-10-02 21:35:50.896918 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (102.545033ms) to execute
	
	* 
	* ==> kernel <==
	*  21:39:22 up  4:21,  0 users,  load average: 0.25, 1.14, 2.08
	Linux ingress-addon-legacy-420597 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [312a2a860681b61094114f87b1568ab6c565b7268f7efd7f8e9776bf6fa801f4] <==
	* I1002 21:37:13.496691       1 main.go:227] handling current node
	I1002 21:37:23.499906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:37:23.499934       1 main.go:227] handling current node
	I1002 21:37:33.503244       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:37:33.503274       1 main.go:227] handling current node
	I1002 21:37:43.516041       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:37:43.516074       1 main.go:227] handling current node
	I1002 21:37:53.519804       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:37:53.519831       1 main.go:227] handling current node
	I1002 21:38:03.522806       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:03.522836       1 main.go:227] handling current node
	I1002 21:38:13.528512       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:13.528543       1 main.go:227] handling current node
	I1002 21:38:23.531975       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:23.532016       1 main.go:227] handling current node
	I1002 21:38:33.535199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:33.535228       1 main.go:227] handling current node
	I1002 21:38:43.538627       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:43.538655       1 main.go:227] handling current node
	I1002 21:38:53.550555       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:38:53.550586       1 main.go:227] handling current node
	I1002 21:39:03.559484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:39:03.559522       1 main.go:227] handling current node
	I1002 21:39:13.572405       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1002 21:39:13.572440       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7a13bdfd5d65f9e57994df412ff6111dd0a0c16a49200bd2390151f03d1c324d] <==
	* I1002 21:35:32.240998       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 21:35:32.323001       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:35:32.323751       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1002 21:35:32.326797       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1002 21:35:32.326847       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1002 21:35:33.021897       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1002 21:35:33.021938       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1002 21:35:33.042052       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1002 21:35:33.047097       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:35:33.047187       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1002 21:35:33.488366       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:35:33.528917       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 21:35:33.599254       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 21:35:33.600283       1 controller.go:609] quota admission added evaluator for: endpoints
	I1002 21:35:33.604288       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:35:34.496895       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1002 21:35:34.984840       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1002 21:35:35.121155       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1002 21:35:38.516831       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:35:50.045647       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1002 21:35:50.195869       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1002 21:35:50.953629       1 trace.go:116] Trace[1911005072]: "Delete" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-h4b84,user-agent:kube-controller-manager/v1.18.20 (linux/arm64) kubernetes/1f3e19b/system:serviceaccount:kube-system:replicaset-controller,client:192.168.49.2 (started: 2023-10-02 21:35:50.436748285 +0000 UTC m=+25.899708917) (total time: 516.829705ms):
	Trace[1911005072]: [516.679322ms] [516.635326ms] Object deleted from database
	I1002 21:36:06.790648       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1002 21:36:35.723171       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [b4ec8d724ac6357b4e4f6113c5ee5be61eea65959d26ad715e17799e77339214] <==
	* I1002 21:35:50.249920       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W1002 21:35:50.254380       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-420597. Assuming now as a timestamp.
	I1002 21:35:50.254472       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1002 21:35:50.254125       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-420597", UID:"bef0d669-8ef1-490d-92f0-7a65a49733e2", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-420597 event: Registered Node ingress-addon-legacy-420597 in Controller
	I1002 21:35:50.278713       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"99c16796-8361-4cf6-bae5-48a953d63f65", APIVersion:"apps/v1", ResourceVersion:"229", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-66d4c
	I1002 21:35:50.330663       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1002 21:35:50.357832       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1002 21:35:50.425737       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"0e7a53a6-c687-4ae5-b467-35de515bae50", APIVersion:"apps/v1", ResourceVersion:"219", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q6lx6
	I1002 21:35:50.454201       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 21:35:50.464892       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 21:35:50.464923       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1002 21:35:50.485428       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"a5b95944-962b-43bd-b41e-8eb83e35b363", APIVersion:"apps/v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1002 21:35:50.499848       1 shared_informer.go:230] Caches are synced for resource quota 
	I1002 21:35:50.534883       1 shared_informer.go:230] Caches are synced for job 
	I1002 21:35:50.561603       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1002 21:35:50.987583       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"470e4cbf-b779-4b56-8f47-5c66a9c92818", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-h4b84
	I1002 21:36:00.255151       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1002 21:36:06.764884       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"353bc1d6-6e51-4feb-8511-7fec03dfb7f3", APIVersion:"apps/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1002 21:36:06.781749       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"bfd234cc-0e00-4272-bd51-aa2eecd0a465", APIVersion:"apps/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-52szk
	I1002 21:36:06.819322       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b4bbab12-c77c-44e7-a60e-cf676ac84a5e", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-pt8cq
	I1002 21:36:06.882876       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"49c2ad59-8e68-4a90-81f6-b187554d0437", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-tnfg4
	I1002 21:36:09.960188       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b4bbab12-c77c-44e7-a60e-cf676ac84a5e", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 21:36:10.959180       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"49c2ad59-8e68-4a90-81f6-b187554d0437", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1002 21:38:55.874782       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"6e7a99e4-d820-4651-aa7e-6e130924a3e9", APIVersion:"apps/v1", ResourceVersion:"719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1002 21:38:55.892999       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"9d8fcb5c-289d-4946-81e0-13b1dc133a49", APIVersion:"apps/v1", ResourceVersion:"720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-96wbq
	
	* 
	* ==> kube-proxy [2e56f6c317bf43781ea37f201565f6fe495efca4692ba9661f201350d722f5d2] <==
	* W1002 21:35:52.968657       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1002 21:35:52.980862       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1002 21:35:52.980999       1 server_others.go:186] Using iptables Proxier.
	I1002 21:35:52.981418       1 server.go:583] Version: v1.18.20
	I1002 21:35:52.986898       1 config.go:315] Starting service config controller
	I1002 21:35:52.986929       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1002 21:35:52.986983       1 config.go:133] Starting endpoints config controller
	I1002 21:35:52.986988       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1002 21:35:53.087095       1 shared_informer.go:230] Caches are synced for service config 
	I1002 21:35:53.087097       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [09b273a5d3778eeee01e53a0a12d20d0c066d02873485b5796631021d7821136] <==
	* I1002 21:35:32.241601       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 21:35:32.241653       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1002 21:35:32.244837       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1002 21:35:32.245675       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:35:32.245819       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 21:35:32.245889       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1002 21:35:32.255436       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 21:35:32.255615       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 21:35:32.255937       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 21:35:32.259869       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 21:35:32.260038       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 21:35:32.260103       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 21:35:32.260162       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 21:35:32.260216       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 21:35:32.260276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 21:35:32.260334       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 21:35:32.260387       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 21:35:32.260448       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 21:35:33.074992       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 21:35:33.074992       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 21:35:33.081795       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 21:35:33.308242       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1002 21:35:33.846010       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1002 21:35:50.212624       1 factory.go:503] pod: kube-system/coredns-66bff467f8-h4b84 is already present in unschedulable queue
	E1002 21:35:50.373511       1 factory.go:503] pod: kube-system/coredns-66bff467f8-wdpfn is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 02 21:39:00 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:00.286106    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 619e97d45fe83acb17d200a8ce219fac388a4b9d5261de5ded600a161ac11350
	Oct 02 21:39:00 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:00.286399    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a
	Oct 02 21:39:00 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:00.286728    1660 pod_workers.go:191] Error syncing pod e4a69edc-a5cd-4252-92fd-2b34afaf1405 ("hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"
	Oct 02 21:39:01 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:01.289187    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a
	Oct 02 21:39:01 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:01.289666    1660 pod_workers.go:191] Error syncing pod e4a69edc-a5cd-4252-92fd-2b34afaf1405 ("hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"
	Oct 02 21:39:05 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:05.623060    1660 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 02 21:39:05 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:05.623106    1660 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 02 21:39:05 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:05.623160    1660 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 02 21:39:05 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:05.623195    1660 pod_workers.go:191] Error syncing pod 8742597d-3068-4560-9dfc-f985876d4809 ("kube-ingress-dns-minikube_kube-system(8742597d-3068-4560-9dfc-f985876d4809)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 02 21:39:11 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:11.870888    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-77vvr" (UniqueName: "kubernetes.io/secret/8742597d-3068-4560-9dfc-f985876d4809-minikube-ingress-dns-token-77vvr") pod "8742597d-3068-4560-9dfc-f985876d4809" (UID: "8742597d-3068-4560-9dfc-f985876d4809")
	Oct 02 21:39:11 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:11.875829    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8742597d-3068-4560-9dfc-f985876d4809-minikube-ingress-dns-token-77vvr" (OuterVolumeSpecName: "minikube-ingress-dns-token-77vvr") pod "8742597d-3068-4560-9dfc-f985876d4809" (UID: "8742597d-3068-4560-9dfc-f985876d4809"). InnerVolumeSpecName "minikube-ingress-dns-token-77vvr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 21:39:11 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:11.971308    1660 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-77vvr" (UniqueName: "kubernetes.io/secret/8742597d-3068-4560-9dfc-f985876d4809-minikube-ingress-dns-token-77vvr") on node "ingress-addon-legacy-420597" DevicePath ""
	Oct 02 21:39:14 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:14.264371    1660 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-52szk.178a682d7e65f9d8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-52szk", UID:"a20d6c2d-27b7-46bb-a4ef-2574921acf34", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-420597"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13eec008f9b65d8, ext:219311792772, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13eec008f9b65d8, ext:219311792772, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-52szk.178a682d7e65f9d8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 21:39:14 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:14.279746    1660 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-52szk.178a682d7e65f9d8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-52szk", UID:"a20d6c2d-27b7-46bb-a4ef-2574921acf34", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-420597"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13eec008f9b65d8, ext:219311792772, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13eec00904a4bf4, ext:219323254936, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-52szk.178a682d7e65f9d8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 02 21:39:16 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:16.622055    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a
	Oct 02 21:39:17 ingress-addon-legacy-420597 kubelet[1660]: W1002 21:39:17.314218    1660 pod_container_deletor.go:77] Container "91e1ea37b0b8a1dc50cc0fe548bfbb4dd54d1d8b656be25e757353ee279827d3" not found in pod's containers
	Oct 02 21:39:17 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:17.315633    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 3c1e2e2b6dafb4d973eb52faca98cb42812cc52c12b39ce23049e7d95d8ef68a
	Oct 02 21:39:17 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:17.315834    1660 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 624558df2f99b89eef66e3b174e676d0dd808db30b15e1e4fef1bb20dff82a0d
	Oct 02 21:39:17 ingress-addon-legacy-420597 kubelet[1660]: E1002 21:39:17.316063    1660 pod_workers.go:191] Error syncing pod e4a69edc-a5cd-4252-92fd-2b34afaf1405 ("hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-96wbq_default(e4a69edc-a5cd-4252-92fd-2b34afaf1405)"
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.386476    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-webhook-cert") pod "a20d6c2d-27b7-46bb-a4ef-2574921acf34" (UID: "a20d6c2d-27b7-46bb-a4ef-2574921acf34")
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.386525    1660 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-r265s" (UniqueName: "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-ingress-nginx-token-r265s") pod "a20d6c2d-27b7-46bb-a4ef-2574921acf34" (UID: "a20d6c2d-27b7-46bb-a4ef-2574921acf34")
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.392546    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a20d6c2d-27b7-46bb-a4ef-2574921acf34" (UID: "a20d6c2d-27b7-46bb-a4ef-2574921acf34"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.393734    1660 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-ingress-nginx-token-r265s" (OuterVolumeSpecName: "ingress-nginx-token-r265s") pod "a20d6c2d-27b7-46bb-a4ef-2574921acf34" (UID: "a20d6c2d-27b7-46bb-a4ef-2574921acf34"). InnerVolumeSpecName "ingress-nginx-token-r265s". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.486866    1660 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-webhook-cert") on node "ingress-addon-legacy-420597" DevicePath ""
	Oct 02 21:39:18 ingress-addon-legacy-420597 kubelet[1660]: I1002 21:39:18.486910    1660 reconciler.go:319] Volume detached for volume "ingress-nginx-token-r265s" (UniqueName: "kubernetes.io/secret/a20d6c2d-27b7-46bb-a4ef-2574921acf34-ingress-nginx-token-r265s") on node "ingress-addon-legacy-420597" DevicePath ""
	
	* 
	* ==> storage-provisioner [4039485c9c40318f70e1056ae85a9c49fcbd7fb618408c95fddaf68dd940ce5e] <==
	* I1002 21:36:06.292778       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 21:36:06.312993       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 21:36:06.314343       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 21:36:06.324086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 21:36:06.324331       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-420597_a4472c9d-82c4-46f7-afa2-2e1e9458d1fe!
	I1002 21:36:06.326276       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"25fdabd5-5528-41f1-9192-414a6241f22a", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-420597_a4472c9d-82c4-46f7-afa2-2e1e9458d1fe became leader
	I1002 21:36:06.425408       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-420597_a4472c9d-82c4-46f7-afa2-2e1e9458d1fe!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-420597 -n ingress-addon-legacy-420597
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-420597 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.16s)
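Note on the kubelet ImageInspectError lines in the log above: CRI-O refuses the short image name used for the minikube-ingress-dns container because the node defines no unqualified-search registries in /etc/containers/registries.conf. A minimal sketch of the two usual remedies is shown below; the docker.io registry choice and the manifest field shown are assumptions for illustration, not taken from this run.

	# Remedy 1 (sketch): fully qualify the image reference in the addon manifest
	image: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab

	# Remedy 2 (sketch): allow short-name resolution by adding a search registry
	# to /etc/containers/registries.conf inside the node container
	unqualified-search-registries = ["docker.io"]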

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- sh -c "ping -c 1 192.168.58.1": exit status 1 (241.600613ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-rpjdg): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- sh -c "ping -c 1 192.168.58.1": exit status 1 (243.461028ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wcgsg): exit status 1
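The "ping: permission denied (are you root?)" errors above are consistent with busybox ping lacking CAP_NET_RAW inside an unprivileged pod on this CRI-O runtime. A minimal follow-up check is sketched below, assuming the kubectl context is named after the profile (multinode-629060) and using a hypothetical pod name ping-capnetraw-check; this was not part of the recorded run.

kubectl --context multinode-629060 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ping-capnetraw-check   # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
  - name: ping
    image: busybox
    # same target as the failing test: the docker network gateway
    command: ["sh", "-c", "ping -c 1 192.168.58.1"]
    securityContext:
      capabilities:
        add: ["NET_RAW"]
EOF
# Once the container has run, its output shows whether granting NET_RAW
# removes the permission error seen in the test:
kubectl --context multinode-629060 logs ping-capnetraw-check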
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-629060
helpers_test.go:235: (dbg) docker inspect multinode-629060:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33",
	        "Created": "2023-10-02T21:45:38.109919076Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1113159,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T21:45:38.468231607Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/hostname",
	        "HostsPath": "/var/lib/docker/containers/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/hosts",
	        "LogPath": "/var/lib/docker/containers/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33-json.log",
	        "Name": "/multinode-629060",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-629060:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-629060",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ba033feabbdc65114bc96eac6c7019bc1d577b8328c198c9e0c3d11615117997-init/diff:/var/lib/docker/overlay2/211b77e87812a1edc3010e11f8a4e888a425a4aebe773b65e967cb7beecedbef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ba033feabbdc65114bc96eac6c7019bc1d577b8328c198c9e0c3d11615117997/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ba033feabbdc65114bc96eac6c7019bc1d577b8328c198c9e0c3d11615117997/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ba033feabbdc65114bc96eac6c7019bc1d577b8328c198c9e0c3d11615117997/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-629060",
	                "Source": "/var/lib/docker/volumes/multinode-629060/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-629060",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-629060",
	                "name.minikube.sigs.k8s.io": "multinode-629060",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09d55a72208ea72d34e0f6955848d13e1eb2f9fd0931cf60e99353ab13260ea9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/09d55a72208e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-629060": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a49cd6d49abe",
	                        "multinode-629060"
	                    ],
	                    "NetworkID": "6756a4f4c6898eaaec18ce106fa74c55af715a6ce6efa5dbf3ec55f50bd7c0d7",
	                    "EndpointID": "29c3841b3169a38d4a9e27a32e8832b6b82e80470e6ac7c83b08bbc54021a60d",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-629060 -n multinode-629060
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-629060 logs -n 25: (1.705324489s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-972057                           | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-972057 ssh -- ls                    | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-970317                           | mount-start-1-970317 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-972057 ssh -- ls                    | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-972057                           | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	| start   | -p mount-start-2-972057                           | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	| ssh     | mount-start-2-972057 ssh -- ls                    | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-972057                           | mount-start-2-972057 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	| delete  | -p mount-start-1-970317                           | mount-start-1-970317 | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:45 UTC |
	| start   | -p multinode-629060                               | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:45 UTC | 02 Oct 23 21:47 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- apply -f                   | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- rollout                    | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- get pods -o                | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- get pods -o                | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-rpjdg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-wcgsg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-rpjdg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-wcgsg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-rpjdg -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-wcgsg -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- get pods -o                | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-rpjdg                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC |                     |
	|         | busybox-5bc68d56bd-rpjdg -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC | 02 Oct 23 21:47 UTC |
	|         | busybox-5bc68d56bd-wcgsg                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-629060 -- exec                       | multinode-629060     | jenkins | v1.31.2 | 02 Oct 23 21:47 UTC |                     |
	|         | busybox-5bc68d56bd-wcgsg -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 21:45:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:45:32.691232 1112716 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:45:32.691371 1112716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:45:32.691376 1112716 out.go:309] Setting ErrFile to fd 2...
	I1002 21:45:32.691381 1112716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:45:32.691619 1112716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:45:32.692033 1112716 out.go:303] Setting JSON to false
	I1002 21:45:32.693083 1112716 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16080,"bootTime":1696267053,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:45:32.693162 1112716 start.go:138] virtualization:  
	I1002 21:45:32.695857 1112716 out.go:177] * [multinode-629060] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:45:32.697856 1112716 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:45:32.699803 1112716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:45:32.698014 1112716 notify.go:220] Checking for updates...
	I1002 21:45:32.703893 1112716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:45:32.705941 1112716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:45:32.707866 1112716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:45:32.710020 1112716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:45:32.712301 1112716 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:45:32.737876 1112716 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:45:32.737984 1112716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:45:32.833492 1112716 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 21:45:32.82345462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:45:32.833603 1112716 docker.go:294] overlay module found
	I1002 21:45:32.836716 1112716 out.go:177] * Using the docker driver based on user configuration
	I1002 21:45:32.838612 1112716 start.go:298] selected driver: docker
	I1002 21:45:32.838636 1112716 start.go:902] validating driver "docker" against <nil>
	I1002 21:45:32.838650 1112716 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:45:32.839295 1112716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:45:32.919210 1112716 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-02 21:45:32.909889733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:45:32.919396 1112716 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 21:45:32.919614 1112716 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:45:32.921594 1112716 out.go:177] * Using Docker driver with root privileges
	I1002 21:45:32.923151 1112716 cni.go:84] Creating CNI manager for ""
	I1002 21:45:32.923171 1112716 cni.go:136] 0 nodes found, recommending kindnet
	I1002 21:45:32.923182 1112716 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:45:32.923199 1112716 start_flags.go:321] config:
	{Name:multinode-629060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:45:32.925310 1112716 out.go:177] * Starting control plane node multinode-629060 in cluster multinode-629060
	I1002 21:45:32.927266 1112716 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:45:32.929317 1112716 out.go:177] * Pulling base image ...
	I1002 21:45:32.931202 1112716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:45:32.931272 1112716 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:45:32.931286 1112716 cache.go:57] Caching tarball of preloaded images
	I1002 21:45:32.931363 1112716 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:45:32.931377 1112716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 21:45:32.931737 1112716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json ...
	I1002 21:45:32.931765 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json: {Name:mk207f27cbeb9bd917d91c38f7e91ff29aed1409 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:32.931925 1112716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:45:32.949551 1112716 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 21:45:32.949574 1112716 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 21:45:32.949595 1112716 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:45:32.949660 1112716 start.go:365] acquiring machines lock for multinode-629060: {Name:mkc13c259deebcd21db2dbf7c298496440ac0809 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:45:32.949780 1112716 start.go:369] acquired machines lock for "multinode-629060" in 100.75µs
	I1002 21:45:32.949804 1112716 start.go:93] Provisioning new machine with config: &{Name:multinode-629060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:45:32.949892 1112716 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:45:32.952259 1112716 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 21:45:32.952535 1112716 start.go:159] libmachine.API.Create for "multinode-629060" (driver="docker")
	I1002 21:45:32.952580 1112716 client.go:168] LocalClient.Create starting
	I1002 21:45:32.952652 1112716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem
	I1002 21:45:32.952691 1112716 main.go:141] libmachine: Decoding PEM data...
	I1002 21:45:32.952710 1112716 main.go:141] libmachine: Parsing certificate...
	I1002 21:45:32.952767 1112716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem
	I1002 21:45:32.952793 1112716 main.go:141] libmachine: Decoding PEM data...
	I1002 21:45:32.952816 1112716 main.go:141] libmachine: Parsing certificate...
	I1002 21:45:32.953173 1112716 cli_runner.go:164] Run: docker network inspect multinode-629060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:45:32.970248 1112716 cli_runner.go:211] docker network inspect multinode-629060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:45:32.970334 1112716 network_create.go:281] running [docker network inspect multinode-629060] to gather additional debugging logs...
	I1002 21:45:32.970354 1112716 cli_runner.go:164] Run: docker network inspect multinode-629060
	W1002 21:45:32.987028 1112716 cli_runner.go:211] docker network inspect multinode-629060 returned with exit code 1
	I1002 21:45:32.987058 1112716 network_create.go:284] error running [docker network inspect multinode-629060]: docker network inspect multinode-629060: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-629060 not found
	I1002 21:45:32.987081 1112716 network_create.go:286] output of [docker network inspect multinode-629060]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-629060 not found
	
	** /stderr **
	I1002 21:45:32.987200 1112716 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:45:33.008920 1112716 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e0177270a4f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:54:66:71} reservation:<nil>}
	I1002 21:45:33.009343 1112716 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000ee4800}
	I1002 21:45:33.009368 1112716 network_create.go:124] attempt to create docker network multinode-629060 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1002 21:45:33.009427 1112716 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-629060 multinode-629060
	I1002 21:45:33.085468 1112716 network_create.go:108] docker network multinode-629060 192.168.58.0/24 created
	I1002 21:45:33.085505 1112716 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-629060" container
	I1002 21:45:33.085584 1112716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:45:33.103511 1112716 cli_runner.go:164] Run: docker volume create multinode-629060 --label name.minikube.sigs.k8s.io=multinode-629060 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:45:33.123498 1112716 oci.go:103] Successfully created a docker volume multinode-629060
	I1002 21:45:33.123600 1112716 cli_runner.go:164] Run: docker run --rm --name multinode-629060-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-629060 --entrypoint /usr/bin/test -v multinode-629060:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 21:45:33.672924 1112716 oci.go:107] Successfully prepared a docker volume multinode-629060
	I1002 21:45:33.672970 1112716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:45:33.672992 1112716 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 21:45:33.673082 1112716 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-629060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:45:38.025731 1112716 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-629060:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.35259765s)
	I1002 21:45:38.025782 1112716 kic.go:199] duration metric: took 4.352787 seconds to extract preloaded images to volume
	W1002 21:45:38.025992 1112716 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:45:38.026141 1112716 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:45:38.093039 1112716 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-629060 --name multinode-629060 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-629060 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-629060 --network multinode-629060 --ip 192.168.58.2 --volume multinode-629060:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 21:45:38.476939 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Running}}
	I1002 21:45:38.503948 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:45:38.531461 1112716 cli_runner.go:164] Run: docker exec multinode-629060 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:45:38.632328 1112716 oci.go:144] the created container "multinode-629060" has a running status.
	I1002 21:45:38.632359 1112716 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa...
	I1002 21:45:38.919535 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:45:38.919627 1112716 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:45:38.949534 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:45:38.976869 1112716 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:45:38.976889 1112716 kic_runner.go:114] Args: [docker exec --privileged multinode-629060 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:45:39.078431 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:45:39.125301 1112716 machine.go:88] provisioning docker machine ...
	I1002 21:45:39.125357 1112716 ubuntu.go:169] provisioning hostname "multinode-629060"
	I1002 21:45:39.125453 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:39.170955 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:39.171590 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1002 21:45:39.171617 1112716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629060 && echo "multinode-629060" | sudo tee /etc/hostname
	I1002 21:45:39.172603 1112716 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59526->127.0.0.1:33810: read: connection reset by peer
	I1002 21:45:42.345376 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629060
	
	I1002 21:45:42.345480 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:42.365455 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:42.365871 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1002 21:45:42.365895 1112716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629060' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629060/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629060' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:45:42.507082 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:45:42.507109 1112716 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 21:45:42.507146 1112716 ubuntu.go:177] setting up certificates
	I1002 21:45:42.507156 1112716 provision.go:83] configureAuth start
	I1002 21:45:42.507227 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060
	I1002 21:45:42.526807 1112716 provision.go:138] copyHostCerts
	I1002 21:45:42.526857 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:45:42.526892 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 21:45:42.526908 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:45:42.526989 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 21:45:42.527073 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:45:42.527098 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 21:45:42.527103 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:45:42.527129 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 21:45:42.527176 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:45:42.527196 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 21:45:42.527200 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:45:42.527224 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 21:45:42.527276 1112716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.multinode-629060 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-629060]
	I1002 21:45:42.900920 1112716 provision.go:172] copyRemoteCerts
	I1002 21:45:42.901000 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:45:42.901045 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:42.920451 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:45:43.024126 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:45:43.024185 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:45:43.052454 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:45:43.052560 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1002 21:45:43.080625 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:45:43.080688 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:45:43.109637 1112716 provision.go:86] duration metric: configureAuth took 602.467019ms
	I1002 21:45:43.109661 1112716 ubuntu.go:193] setting minikube options for container-runtime
	I1002 21:45:43.109851 1112716 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:45:43.109953 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:43.131070 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:43.131489 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33810 <nil> <nil>}
	I1002 21:45:43.131504 1112716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:45:43.392578 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:45:43.392605 1112716 machine.go:91] provisioned docker machine in 4.267262128s
	I1002 21:45:43.392615 1112716 client.go:171] LocalClient.Create took 10.440021363s
	I1002 21:45:43.392631 1112716 start.go:167] duration metric: libmachine.API.Create for "multinode-629060" took 10.440097391s
	I1002 21:45:43.392648 1112716 start.go:300] post-start starting for "multinode-629060" (driver="docker")
	I1002 21:45:43.392658 1112716 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:45:43.392739 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:45:43.392786 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:43.411708 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:45:43.512490 1112716 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:45:43.516519 1112716 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 21:45:43.516539 1112716 command_runner.go:130] > NAME="Ubuntu"
	I1002 21:45:43.516545 1112716 command_runner.go:130] > VERSION_ID="22.04"
	I1002 21:45:43.516552 1112716 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 21:45:43.516558 1112716 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 21:45:43.516562 1112716 command_runner.go:130] > ID=ubuntu
	I1002 21:45:43.516567 1112716 command_runner.go:130] > ID_LIKE=debian
	I1002 21:45:43.516573 1112716 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 21:45:43.516579 1112716 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 21:45:43.516586 1112716 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 21:45:43.516595 1112716 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 21:45:43.516600 1112716 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 21:45:43.516665 1112716 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:45:43.516689 1112716 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 21:45:43.516699 1112716 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 21:45:43.516706 1112716 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 21:45:43.516717 1112716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 21:45:43.516775 1112716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 21:45:43.516853 1112716 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 21:45:43.516860 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /etc/ssl/certs/10477322.pem
	I1002 21:45:43.516955 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:45:43.527522 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:45:43.555642 1112716 start.go:303] post-start completed in 162.977173ms
	I1002 21:45:43.556070 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060
	I1002 21:45:43.573830 1112716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json ...
	I1002 21:45:43.574118 1112716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:45:43.574178 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:43.592899 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:45:43.691473 1112716 command_runner.go:130] > 11%!
	(MISSING)I1002 21:45:43.691561 1112716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:45:43.697096 1112716 command_runner.go:130] > 174G
	I1002 21:45:43.697610 1112716 start.go:128] duration metric: createHost completed in 10.747704868s
	I1002 21:45:43.697633 1112716 start.go:83] releasing machines lock for "multinode-629060", held for 10.747844256s
	I1002 21:45:43.697708 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060
	I1002 21:45:43.715031 1112716 ssh_runner.go:195] Run: cat /version.json
	I1002 21:45:43.715084 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:43.715094 1112716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:45:43.715152 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:45:43.733298 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:45:43.738634 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:45:43.960025 1112716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 21:45:43.960075 1112716 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I1002 21:45:43.960215 1112716 ssh_runner.go:195] Run: systemctl --version
	I1002 21:45:43.965276 1112716 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1002 21:45:43.965309 1112716 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 21:45:43.965598 1112716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:45:44.114617 1112716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 21:45:44.120202 1112716 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 21:45:44.120227 1112716 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1002 21:45:44.120236 1112716 command_runner.go:130] > Device: 36h/54d	Inode: 1568809     Links: 1
	I1002 21:45:44.120244 1112716 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:45:44.120253 1112716 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1002 21:45:44.120259 1112716 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1002 21:45:44.120269 1112716 command_runner.go:130] > Change: 2023-10-02 21:23:08.253134165 +0000
	I1002 21:45:44.120276 1112716 command_runner.go:130] >  Birth: 2023-10-02 21:23:08.253134165 +0000
	I1002 21:45:44.120721 1112716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:45:44.146883 1112716 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 21:45:44.147014 1112716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:45:44.185290 1112716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1002 21:45:44.185321 1112716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1002 21:45:44.185329 1112716 start.go:469] detecting cgroup driver to use...
	I1002 21:45:44.185380 1112716 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 21:45:44.185454 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:45:44.205515 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:45:44.219921 1112716 docker.go:197] disabling cri-docker service (if available) ...
	I1002 21:45:44.220027 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:45:44.236627 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:45:44.254051 1112716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:45:44.360950 1112716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:45:44.467672 1112716 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 21:45:44.467741 1112716 docker.go:213] disabling docker service ...
	I1002 21:45:44.467803 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:45:44.488639 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:45:44.506973 1112716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:45:44.605399 1112716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 21:45:44.605501 1112716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:45:44.721652 1112716 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 21:45:44.721963 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:45:44.735572 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:45:44.755569 1112716 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 21:45:44.757573 1112716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:45:44.757688 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:44.771799 1112716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:45:44.771915 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:44.784948 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:44.797328 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:44.809818 1112716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:45:44.822064 1112716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:45:44.832207 1112716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 21:45:44.833483 1112716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:45:44.844102 1112716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:45:44.946821 1112716 ssh_runner.go:195] Run: sudo systemctl restart crio
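	
	The 21:45:44 block above prepares CRI-O before the restart: crictl is pointed at /var/run/crio/crio.sock, the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs with conmon placed in the "pod" cgroup, and the stale /etc/cni/net.mk directory is removed. The Go sketch below only mirrors the two in-place substitutions for illustration; the sample drop-in contents are an assumption, and only the replacement values come from the log.
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Assumed contents of /etc/crio/crio.conf.d/02-crio.conf, for illustration only.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.6"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Pin the pause image, as in the first sed the log records.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		// Switch the cgroup manager to cgroupfs, as in the second sed.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}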
	I1002 21:45:45.082221 1112716 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:45:45.082317 1112716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:45:45.088626 1112716 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 21:45:45.088652 1112716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 21:45:45.088661 1112716 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1002 21:45:45.088669 1112716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:45:45.088676 1112716 command_runner.go:130] > Access: 2023-10-02 21:45:45.063234608 +0000
	I1002 21:45:45.088683 1112716 command_runner.go:130] > Modify: 2023-10-02 21:45:45.063234608 +0000
	I1002 21:45:45.088689 1112716 command_runner.go:130] > Change: 2023-10-02 21:45:45.063234608 +0000
	I1002 21:45:45.088699 1112716 command_runner.go:130] >  Birth: -
	I1002 21:45:45.088956 1112716 start.go:537] Will wait 60s for crictl version
	I1002 21:45:45.089043 1112716 ssh_runner.go:195] Run: which crictl
	I1002 21:45:45.094789 1112716 command_runner.go:130] > /usr/bin/crictl
	I1002 21:45:45.095168 1112716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:45:45.144479 1112716 command_runner.go:130] > Version:  0.1.0
	I1002 21:45:45.144514 1112716 command_runner.go:130] > RuntimeName:  cri-o
	I1002 21:45:45.144520 1112716 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1002 21:45:45.144527 1112716 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 21:45:45.147600 1112716 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 21:45:45.147721 1112716 ssh_runner.go:195] Run: crio --version
	I1002 21:45:45.199448 1112716 command_runner.go:130] > crio version 1.24.6
	I1002 21:45:45.199472 1112716 command_runner.go:130] > Version:          1.24.6
	I1002 21:45:45.199482 1112716 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 21:45:45.199488 1112716 command_runner.go:130] > GitTreeState:     clean
	I1002 21:45:45.199495 1112716 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 21:45:45.199502 1112716 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 21:45:45.199507 1112716 command_runner.go:130] > Compiler:         gc
	I1002 21:45:45.199513 1112716 command_runner.go:130] > Platform:         linux/arm64
	I1002 21:45:45.199520 1112716 command_runner.go:130] > Linkmode:         dynamic
	I1002 21:45:45.199533 1112716 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 21:45:45.199540 1112716 command_runner.go:130] > SeccompEnabled:   true
	I1002 21:45:45.199546 1112716 command_runner.go:130] > AppArmorEnabled:  false
	I1002 21:45:45.201898 1112716 ssh_runner.go:195] Run: crio --version
	I1002 21:45:45.257796 1112716 command_runner.go:130] > crio version 1.24.6
	I1002 21:45:45.257822 1112716 command_runner.go:130] > Version:          1.24.6
	I1002 21:45:45.257834 1112716 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 21:45:45.257847 1112716 command_runner.go:130] > GitTreeState:     clean
	I1002 21:45:45.257858 1112716 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 21:45:45.257874 1112716 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 21:45:45.257895 1112716 command_runner.go:130] > Compiler:         gc
	I1002 21:45:45.257901 1112716 command_runner.go:130] > Platform:         linux/arm64
	I1002 21:45:45.257917 1112716 command_runner.go:130] > Linkmode:         dynamic
	I1002 21:45:45.257943 1112716 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 21:45:45.257956 1112716 command_runner.go:130] > SeccompEnabled:   true
	I1002 21:45:45.257968 1112716 command_runner.go:130] > AppArmorEnabled:  false
	I1002 21:45:45.264710 1112716 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 21:45:45.267413 1112716 cli_runner.go:164] Run: docker network inspect multinode-629060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:45:45.288858 1112716 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 21:45:45.294869 1112716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:45:45.310155 1112716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:45:45.310224 1112716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:45:45.378546 1112716 command_runner.go:130] > {
	I1002 21:45:45.378569 1112716 command_runner.go:130] >   "images": [
	I1002 21:45:45.378574 1112716 command_runner.go:130] >     {
	I1002 21:45:45.378584 1112716 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1002 21:45:45.378590 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.378600 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 21:45:45.378604 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378610 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.378622 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 21:45:45.378631 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1002 21:45:45.378635 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378641 1112716 command_runner.go:130] >       "size": "60867618",
	I1002 21:45:45.378646 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.378651 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.378659 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.378664 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.378668 1112716 command_runner.go:130] >     },
	I1002 21:45:45.378673 1112716 command_runner.go:130] >     {
	I1002 21:45:45.378702 1112716 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1002 21:45:45.378708 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.378714 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 21:45:45.378720 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378725 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.378735 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1002 21:45:45.378745 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1002 21:45:45.378750 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378757 1112716 command_runner.go:130] >       "size": "29037500",
	I1002 21:45:45.378762 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.378767 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.378772 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.378777 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.378782 1112716 command_runner.go:130] >     },
	I1002 21:45:45.378786 1112716 command_runner.go:130] >     {
	I1002 21:45:45.378794 1112716 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1002 21:45:45.378799 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.378805 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 21:45:45.378809 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378815 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.378824 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1002 21:45:45.378833 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1002 21:45:45.378838 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378844 1112716 command_runner.go:130] >       "size": "51393451",
	I1002 21:45:45.378849 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.378854 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.378858 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.378866 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.378870 1112716 command_runner.go:130] >     },
	I1002 21:45:45.378874 1112716 command_runner.go:130] >     {
	I1002 21:45:45.378882 1112716 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1002 21:45:45.378887 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.378893 1112716 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 21:45:45.378897 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378902 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.378911 1112716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1002 21:45:45.378919 1112716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1002 21:45:45.378926 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378931 1112716 command_runner.go:130] >       "size": "182203183",
	I1002 21:45:45.378936 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.378941 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.378945 1112716 command_runner.go:130] >       },
	I1002 21:45:45.378950 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.378955 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.378960 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.378964 1112716 command_runner.go:130] >     },
	I1002 21:45:45.378969 1112716 command_runner.go:130] >     {
	I1002 21:45:45.378976 1112716 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1002 21:45:45.378981 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.378988 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 21:45:45.378992 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.378998 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.379007 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1002 21:45:45.379017 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 21:45:45.379021 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379026 1112716 command_runner.go:130] >       "size": "121054158",
	I1002 21:45:45.379031 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.379036 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.379040 1112716 command_runner.go:130] >       },
	I1002 21:45:45.379046 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.379051 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.379056 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.379060 1112716 command_runner.go:130] >     },
	I1002 21:45:45.379065 1112716 command_runner.go:130] >     {
	I1002 21:45:45.379073 1112716 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1002 21:45:45.379078 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.379084 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 21:45:45.379089 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379093 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.379103 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1002 21:45:45.379113 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1002 21:45:45.379117 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379123 1112716 command_runner.go:130] >       "size": "117187380",
	I1002 21:45:45.379127 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.379133 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.379138 1112716 command_runner.go:130] >       },
	I1002 21:45:45.379143 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.379148 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.379153 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.379157 1112716 command_runner.go:130] >     },
	I1002 21:45:45.379162 1112716 command_runner.go:130] >     {
	I1002 21:45:45.379169 1112716 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1002 21:45:45.379174 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.379180 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 21:45:45.379185 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379190 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.379199 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1002 21:45:45.379208 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1002 21:45:45.379213 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379218 1112716 command_runner.go:130] >       "size": "69926807",
	I1002 21:45:45.379223 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.379227 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.379233 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.379238 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.379242 1112716 command_runner.go:130] >     },
	I1002 21:45:45.379247 1112716 command_runner.go:130] >     {
	I1002 21:45:45.379257 1112716 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1002 21:45:45.379261 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.379268 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 21:45:45.379272 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379277 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.379320 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 21:45:45.379330 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1002 21:45:45.379335 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379340 1112716 command_runner.go:130] >       "size": "59188020",
	I1002 21:45:45.379345 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.379351 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.379356 1112716 command_runner.go:130] >       },
	I1002 21:45:45.379361 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.379366 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.379371 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.379375 1112716 command_runner.go:130] >     },
	I1002 21:45:45.379380 1112716 command_runner.go:130] >     {
	I1002 21:45:45.379387 1112716 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1002 21:45:45.379393 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.379399 1112716 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 21:45:45.379403 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379409 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.379418 1112716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1002 21:45:45.379427 1112716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1002 21:45:45.379432 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.379437 1112716 command_runner.go:130] >       "size": "520014",
	I1002 21:45:45.379442 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.379447 1112716 command_runner.go:130] >         "value": "65535"
	I1002 21:45:45.379452 1112716 command_runner.go:130] >       },
	I1002 21:45:45.379457 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.379462 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.379467 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.379471 1112716 command_runner.go:130] >     }
	I1002 21:45:45.379476 1112716 command_runner.go:130] >   ]
	I1002 21:45:45.379481 1112716 command_runner.go:130] > }
	I1002 21:45:45.381802 1112716 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:45:45.381880 1112716 crio.go:415] Images already preloaded, skipping extraction
	I1002 21:45:45.381986 1112716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:45:45.423292 1112716 command_runner.go:130] > {
	I1002 21:45:45.423355 1112716 command_runner.go:130] >   "images": [
	I1002 21:45:45.423375 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423397 1112716 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1002 21:45:45.423404 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423413 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1002 21:45:45.423421 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423426 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423436 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1002 21:45:45.423446 1112716 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1002 21:45:45.423453 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423460 1112716 command_runner.go:130] >       "size": "60867618",
	I1002 21:45:45.423468 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.423473 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.423483 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.423488 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.423493 1112716 command_runner.go:130] >     },
	I1002 21:45:45.423500 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423508 1112716 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1002 21:45:45.423516 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423523 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 21:45:45.423527 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423533 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423542 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1002 21:45:45.423552 1112716 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1002 21:45:45.423557 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423565 1112716 command_runner.go:130] >       "size": "29037500",
	I1002 21:45:45.423570 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.423575 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.423580 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.423584 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.423589 1112716 command_runner.go:130] >     },
	I1002 21:45:45.423593 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423601 1112716 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1002 21:45:45.423606 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423615 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1002 21:45:45.423620 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423625 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423637 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1002 21:45:45.423649 1112716 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1002 21:45:45.423656 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423661 1112716 command_runner.go:130] >       "size": "51393451",
	I1002 21:45:45.423666 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.423671 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.423679 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.423684 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.423688 1112716 command_runner.go:130] >     },
	I1002 21:45:45.423694 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423701 1112716 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1002 21:45:45.423708 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423715 1112716 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1002 21:45:45.423722 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423727 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423735 1112716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1002 21:45:45.423748 1112716 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1002 21:45:45.423755 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423763 1112716 command_runner.go:130] >       "size": "182203183",
	I1002 21:45:45.423768 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.423773 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.423778 1112716 command_runner.go:130] >       },
	I1002 21:45:45.423785 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.423792 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.423800 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.423804 1112716 command_runner.go:130] >     },
	I1002 21:45:45.423811 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423819 1112716 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1002 21:45:45.423825 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423833 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1002 21:45:45.423846 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423853 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423863 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1002 21:45:45.423872 1112716 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1002 21:45:45.423879 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423884 1112716 command_runner.go:130] >       "size": "121054158",
	I1002 21:45:45.423898 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.423903 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.423908 1112716 command_runner.go:130] >       },
	I1002 21:45:45.423915 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.423920 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.423925 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.423932 1112716 command_runner.go:130] >     },
	I1002 21:45:45.423937 1112716 command_runner.go:130] >     {
	I1002 21:45:45.423944 1112716 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1002 21:45:45.423951 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.423958 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1002 21:45:45.423964 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.423969 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.423981 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1002 21:45:45.423993 1112716 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1002 21:45:45.423998 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424007 1112716 command_runner.go:130] >       "size": "117187380",
	I1002 21:45:45.424012 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.424017 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.424021 1112716 command_runner.go:130] >       },
	I1002 21:45:45.424029 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.424034 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.424039 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.424046 1112716 command_runner.go:130] >     },
	I1002 21:45:45.424054 1112716 command_runner.go:130] >     {
	I1002 21:45:45.424062 1112716 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1002 21:45:45.424069 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.424075 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1002 21:45:45.424080 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424087 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.424097 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1002 21:45:45.424109 1112716 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1002 21:45:45.424114 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424119 1112716 command_runner.go:130] >       "size": "69926807",
	I1002 21:45:45.424125 1112716 command_runner.go:130] >       "uid": null,
	I1002 21:45:45.424132 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.424139 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.424145 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.424152 1112716 command_runner.go:130] >     },
	I1002 21:45:45.424156 1112716 command_runner.go:130] >     {
	I1002 21:45:45.424166 1112716 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1002 21:45:45.424173 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.424180 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1002 21:45:45.424188 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424194 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.424229 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1002 21:45:45.424243 1112716 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1002 21:45:45.424249 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424257 1112716 command_runner.go:130] >       "size": "59188020",
	I1002 21:45:45.424262 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.424269 1112716 command_runner.go:130] >         "value": "0"
	I1002 21:45:45.424274 1112716 command_runner.go:130] >       },
	I1002 21:45:45.424279 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.424284 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.424289 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.424294 1112716 command_runner.go:130] >     },
	I1002 21:45:45.424301 1112716 command_runner.go:130] >     {
	I1002 21:45:45.424312 1112716 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1002 21:45:45.424325 1112716 command_runner.go:130] >       "repoTags": [
	I1002 21:45:45.424331 1112716 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1002 21:45:45.424336 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424343 1112716 command_runner.go:130] >       "repoDigests": [
	I1002 21:45:45.424353 1112716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1002 21:45:45.424362 1112716 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1002 21:45:45.424369 1112716 command_runner.go:130] >       ],
	I1002 21:45:45.424374 1112716 command_runner.go:130] >       "size": "520014",
	I1002 21:45:45.424379 1112716 command_runner.go:130] >       "uid": {
	I1002 21:45:45.424386 1112716 command_runner.go:130] >         "value": "65535"
	I1002 21:45:45.424393 1112716 command_runner.go:130] >       },
	I1002 21:45:45.424398 1112716 command_runner.go:130] >       "username": "",
	I1002 21:45:45.424408 1112716 command_runner.go:130] >       "spec": null,
	I1002 21:45:45.424413 1112716 command_runner.go:130] >       "pinned": false
	I1002 21:45:45.424417 1112716 command_runner.go:130] >     }
	I1002 21:45:45.424422 1112716 command_runner.go:130] >   ]
	I1002 21:45:45.424428 1112716 command_runner.go:130] > }
	I1002 21:45:45.424564 1112716 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:45:45.424576 1112716 cache_images.go:84] Images are preloaded, skipping loading
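The JSON dumps above are what the image-preload check walks through: each entry carries an "id", "repoTags", "repoDigests", "size" and "pinned" field. As a minimal sketch only (the struct, helper names and chosen tag below are illustrative assumptions, not minikube's own code), the same payload produced by `sudo crictl images --output json` can be decoded and probed for an expected tag with Go's encoding/json:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields visible in the dump above; the field set is
// taken from the log, but the struct itself is only an illustration, not a
// type from minikube or cri-tools.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"` // reported as a decimal string, e.g. "182203183"
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Same command the log shows being run on the node via ssh_runner.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Check for one of the tags the preload is expected to provide.
	const want = "registry.k8s.io/kube-apiserver:v1.28.2"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Printf("found %s (id %s)\n", want, img.ID)
				return
			}
		}
	}
	fmt.Printf("missing %s\n", want)
}

If every tag expected from the preloaded tarball resolves this way, there is nothing left to extract, which is the "Images already preloaded, skipping extraction" outcome logged above.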
	I1002 21:45:45.424650 1112716 ssh_runner.go:195] Run: crio config
	I1002 21:45:45.479916 1112716 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 21:45:45.479984 1112716 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 21:45:45.480008 1112716 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 21:45:45.480029 1112716 command_runner.go:130] > #
	I1002 21:45:45.480063 1112716 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 21:45:45.480091 1112716 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 21:45:45.480116 1112716 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 21:45:45.480148 1112716 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 21:45:45.480175 1112716 command_runner.go:130] > # reload'.
	I1002 21:45:45.480202 1112716 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 21:45:45.480225 1112716 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 21:45:45.480248 1112716 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 21:45:45.480281 1112716 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 21:45:45.480302 1112716 command_runner.go:130] > [crio]
	I1002 21:45:45.480324 1112716 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 21:45:45.480345 1112716 command_runner.go:130] > # container images, in this directory.
	I1002 21:45:45.480582 1112716 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 21:45:45.480596 1112716 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 21:45:45.480854 1112716 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1002 21:45:45.480867 1112716 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 21:45:45.480876 1112716 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 21:45:45.480881 1112716 command_runner.go:130] > # storage_driver = "vfs"
	I1002 21:45:45.480888 1112716 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 21:45:45.480895 1112716 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 21:45:45.480900 1112716 command_runner.go:130] > # storage_option = [
	I1002 21:45:45.480905 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.480913 1112716 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 21:45:45.480921 1112716 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 21:45:45.480927 1112716 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 21:45:45.480934 1112716 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 21:45:45.480941 1112716 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 21:45:45.480947 1112716 command_runner.go:130] > # always happen on a node reboot
	I1002 21:45:45.480953 1112716 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 21:45:45.480960 1112716 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 21:45:45.480969 1112716 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 21:45:45.480986 1112716 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 21:45:45.480992 1112716 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 21:45:45.481002 1112716 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 21:45:45.481012 1112716 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 21:45:45.481017 1112716 command_runner.go:130] > # internal_wipe = true
	I1002 21:45:45.481024 1112716 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 21:45:45.481031 1112716 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 21:45:45.481038 1112716 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 21:45:45.481045 1112716 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 21:45:45.481058 1112716 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 21:45:45.481063 1112716 command_runner.go:130] > [crio.api]
	I1002 21:45:45.481069 1112716 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 21:45:45.481075 1112716 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 21:45:45.481081 1112716 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 21:45:45.481086 1112716 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 21:45:45.481094 1112716 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 21:45:45.481101 1112716 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 21:45:45.481106 1112716 command_runner.go:130] > # stream_port = "0"
	I1002 21:45:45.481112 1112716 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 21:45:45.481118 1112716 command_runner.go:130] > # stream_enable_tls = false
	I1002 21:45:45.481125 1112716 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 21:45:45.481132 1112716 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 21:45:45.481140 1112716 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 21:45:45.481147 1112716 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 21:45:45.481152 1112716 command_runner.go:130] > # minutes.
	I1002 21:45:45.481157 1112716 command_runner.go:130] > # stream_tls_cert = ""
	I1002 21:45:45.481164 1112716 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 21:45:45.481172 1112716 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 21:45:45.481177 1112716 command_runner.go:130] > # stream_tls_key = ""
	I1002 21:45:45.481184 1112716 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 21:45:45.481193 1112716 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 21:45:45.481200 1112716 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 21:45:45.481227 1112716 command_runner.go:130] > # stream_tls_ca = ""
	I1002 21:45:45.481237 1112716 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 21:45:45.481243 1112716 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 21:45:45.481251 1112716 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 21:45:45.481257 1112716 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 21:45:45.481286 1112716 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 21:45:45.481294 1112716 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 21:45:45.481299 1112716 command_runner.go:130] > [crio.runtime]
	I1002 21:45:45.481306 1112716 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 21:45:45.481312 1112716 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 21:45:45.481317 1112716 command_runner.go:130] > # "nofile=1024:2048"
	I1002 21:45:45.481325 1112716 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 21:45:45.481330 1112716 command_runner.go:130] > # default_ulimits = [
	I1002 21:45:45.481334 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481342 1112716 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 21:45:45.481354 1112716 command_runner.go:130] > # no_pivot = false
	I1002 21:45:45.481361 1112716 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 21:45:45.481369 1112716 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 21:45:45.481376 1112716 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 21:45:45.481383 1112716 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 21:45:45.481389 1112716 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 21:45:45.481397 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 21:45:45.481402 1112716 command_runner.go:130] > # conmon = ""
	I1002 21:45:45.481408 1112716 command_runner.go:130] > # Cgroup setting for conmon
	I1002 21:45:45.481416 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 21:45:45.481421 1112716 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 21:45:45.481428 1112716 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 21:45:45.481436 1112716 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 21:45:45.481444 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 21:45:45.481448 1112716 command_runner.go:130] > # conmon_env = [
	I1002 21:45:45.481453 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481459 1112716 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 21:45:45.481465 1112716 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 21:45:45.481472 1112716 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 21:45:45.481476 1112716 command_runner.go:130] > # default_env = [
	I1002 21:45:45.481480 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481487 1112716 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 21:45:45.481492 1112716 command_runner.go:130] > # selinux = false
	I1002 21:45:45.481500 1112716 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 21:45:45.481507 1112716 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 21:45:45.481516 1112716 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 21:45:45.481521 1112716 command_runner.go:130] > # seccomp_profile = ""
	I1002 21:45:45.481528 1112716 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 21:45:45.481535 1112716 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 21:45:45.481543 1112716 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 21:45:45.481548 1112716 command_runner.go:130] > # which might increase security.
	I1002 21:45:45.481554 1112716 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1002 21:45:45.481562 1112716 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 21:45:45.481569 1112716 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 21:45:45.481576 1112716 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 21:45:45.481584 1112716 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 21:45:45.481590 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:45:45.481596 1112716 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 21:45:45.481604 1112716 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 21:45:45.481610 1112716 command_runner.go:130] > # the cgroup blockio controller.
	I1002 21:45:45.481615 1112716 command_runner.go:130] > # blockio_config_file = ""
	I1002 21:45:45.481623 1112716 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 21:45:45.481628 1112716 command_runner.go:130] > # irqbalance daemon.
	I1002 21:45:45.481635 1112716 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 21:45:45.481643 1112716 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 21:45:45.481651 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:45:45.481656 1112716 command_runner.go:130] > # rdt_config_file = ""
	I1002 21:45:45.481662 1112716 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 21:45:45.481667 1112716 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 21:45:45.481675 1112716 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 21:45:45.481681 1112716 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 21:45:45.481689 1112716 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 21:45:45.481696 1112716 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 21:45:45.481701 1112716 command_runner.go:130] > # will be added.
	I1002 21:45:45.481706 1112716 command_runner.go:130] > # default_capabilities = [
	I1002 21:45:45.481710 1112716 command_runner.go:130] > # 	"CHOWN",
	I1002 21:45:45.481715 1112716 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 21:45:45.481719 1112716 command_runner.go:130] > # 	"FSETID",
	I1002 21:45:45.481724 1112716 command_runner.go:130] > # 	"FOWNER",
	I1002 21:45:45.481728 1112716 command_runner.go:130] > # 	"SETGID",
	I1002 21:45:45.481732 1112716 command_runner.go:130] > # 	"SETUID",
	I1002 21:45:45.481737 1112716 command_runner.go:130] > # 	"SETPCAP",
	I1002 21:45:45.481742 1112716 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 21:45:45.481746 1112716 command_runner.go:130] > # 	"KILL",
	I1002 21:45:45.481750 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481760 1112716 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 21:45:45.481767 1112716 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 21:45:45.481774 1112716 command_runner.go:130] > # add_inheritable_capabilities = true
	I1002 21:45:45.481781 1112716 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 21:45:45.481788 1112716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 21:45:45.481793 1112716 command_runner.go:130] > # default_sysctls = [
	I1002 21:45:45.481797 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481803 1112716 command_runner.go:130] > # List of devices on the host that a
	I1002 21:45:45.481811 1112716 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 21:45:45.481816 1112716 command_runner.go:130] > # allowed_devices = [
	I1002 21:45:45.481820 1112716 command_runner.go:130] > # 	"/dev/fuse",
	I1002 21:45:45.481825 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481831 1112716 command_runner.go:130] > # List of additional devices, specified as
	I1002 21:45:45.481857 1112716 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 21:45:45.481865 1112716 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 21:45:45.481872 1112716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 21:45:45.481879 1112716 command_runner.go:130] > # additional_devices = [
	I1002 21:45:45.481883 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481889 1112716 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 21:45:45.481894 1112716 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 21:45:45.481898 1112716 command_runner.go:130] > # 	"/etc/cdi",
	I1002 21:45:45.481903 1112716 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 21:45:45.481907 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481915 1112716 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 21:45:45.481922 1112716 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 21:45:45.481927 1112716 command_runner.go:130] > # Defaults to false.
	I1002 21:45:45.481933 1112716 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 21:45:45.481940 1112716 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 21:45:45.481948 1112716 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 21:45:45.481953 1112716 command_runner.go:130] > # hooks_dir = [
	I1002 21:45:45.481959 1112716 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 21:45:45.481963 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.481971 1112716 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 21:45:45.481978 1112716 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 21:45:45.481985 1112716 command_runner.go:130] > # its default mounts from the following two files:
	I1002 21:45:45.481989 1112716 command_runner.go:130] > #
	I1002 21:45:45.481996 1112716 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 21:45:45.482004 1112716 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 21:45:45.482010 1112716 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 21:45:45.482014 1112716 command_runner.go:130] > #
	I1002 21:45:45.482022 1112716 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 21:45:45.482029 1112716 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 21:45:45.482037 1112716 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 21:45:45.482043 1112716 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 21:45:45.482048 1112716 command_runner.go:130] > #
	I1002 21:45:45.482053 1112716 command_runner.go:130] > # default_mounts_file = ""
	I1002 21:45:45.482059 1112716 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 21:45:45.482068 1112716 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 21:45:45.482432 1112716 command_runner.go:130] > # pids_limit = 0
	I1002 21:45:45.482447 1112716 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 21:45:45.482455 1112716 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 21:45:45.482463 1112716 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 21:45:45.482473 1112716 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 21:45:45.482478 1112716 command_runner.go:130] > # log_size_max = -1
	I1002 21:45:45.482486 1112716 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 21:45:45.482498 1112716 command_runner.go:130] > # log_to_journald = false
	I1002 21:45:45.482506 1112716 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 21:45:45.482512 1112716 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 21:45:45.482519 1112716 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 21:45:45.482525 1112716 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 21:45:45.482553 1112716 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 21:45:45.482559 1112716 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 21:45:45.482566 1112716 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 21:45:45.482571 1112716 command_runner.go:130] > # read_only = false
	I1002 21:45:45.482579 1112716 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 21:45:45.482586 1112716 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 21:45:45.482592 1112716 command_runner.go:130] > # live configuration reload.
	I1002 21:45:45.482599 1112716 command_runner.go:130] > # log_level = "info"
	I1002 21:45:45.482606 1112716 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 21:45:45.482612 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:45:45.482617 1112716 command_runner.go:130] > # log_filter = ""
	I1002 21:45:45.482624 1112716 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 21:45:45.482632 1112716 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 21:45:45.482637 1112716 command_runner.go:130] > # separated by comma.
	I1002 21:45:45.482641 1112716 command_runner.go:130] > # uid_mappings = ""
	I1002 21:45:45.482649 1112716 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 21:45:45.482656 1112716 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 21:45:45.482661 1112716 command_runner.go:130] > # separated by comma.
	I1002 21:45:45.482666 1112716 command_runner.go:130] > # gid_mappings = ""
	I1002 21:45:45.482673 1112716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 21:45:45.482681 1112716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 21:45:45.482688 1112716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 21:45:45.482694 1112716 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 21:45:45.482702 1112716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 21:45:45.482710 1112716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 21:45:45.482717 1112716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 21:45:45.482724 1112716 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 21:45:45.482731 1112716 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 21:45:45.482739 1112716 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 21:45:45.482746 1112716 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 21:45:45.482751 1112716 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 21:45:45.482758 1112716 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 21:45:45.482780 1112716 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 21:45:45.482790 1112716 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 21:45:45.482796 1112716 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 21:45:45.482801 1112716 command_runner.go:130] > # drop_infra_ctr = true
	I1002 21:45:45.482808 1112716 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 21:45:45.482815 1112716 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 21:45:45.482824 1112716 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 21:45:45.482829 1112716 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 21:45:45.482837 1112716 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 21:45:45.482843 1112716 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 21:45:45.483148 1112716 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 21:45:45.483165 1112716 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 21:45:45.483170 1112716 command_runner.go:130] > # pinns_path = ""
	I1002 21:45:45.483181 1112716 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 21:45:45.483189 1112716 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 21:45:45.483197 1112716 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 21:45:45.483202 1112716 command_runner.go:130] > # default_runtime = "runc"
	I1002 21:45:45.483209 1112716 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 21:45:45.483219 1112716 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 21:45:45.483230 1112716 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 21:45:45.483236 1112716 command_runner.go:130] > # creation as a file is not desired either.
	I1002 21:45:45.483246 1112716 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 21:45:45.483252 1112716 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 21:45:45.483258 1112716 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 21:45:45.483262 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.483270 1112716 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 21:45:45.483277 1112716 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 21:45:45.483285 1112716 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 21:45:45.483293 1112716 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 21:45:45.483297 1112716 command_runner.go:130] > #
	I1002 21:45:45.483303 1112716 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 21:45:45.483309 1112716 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 21:45:45.483316 1112716 command_runner.go:130] > #  runtime_type = "oci"
	I1002 21:45:45.483322 1112716 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 21:45:45.483328 1112716 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 21:45:45.483333 1112716 command_runner.go:130] > #  allowed_annotations = []
	I1002 21:45:45.483338 1112716 command_runner.go:130] > # Where:
	I1002 21:45:45.483344 1112716 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 21:45:45.483357 1112716 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 21:45:45.483365 1112716 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 21:45:45.483372 1112716 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 21:45:45.483377 1112716 command_runner.go:130] > #   in $PATH.
	I1002 21:45:45.483385 1112716 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 21:45:45.483391 1112716 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 21:45:45.483398 1112716 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 21:45:45.483402 1112716 command_runner.go:130] > #   state.
	I1002 21:45:45.483410 1112716 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 21:45:45.483419 1112716 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 21:45:45.483426 1112716 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 21:45:45.483435 1112716 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 21:45:45.483443 1112716 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 21:45:45.483451 1112716 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 21:45:45.483457 1112716 command_runner.go:130] > #   The currently recognized values are:
	I1002 21:45:45.483465 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 21:45:45.483474 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 21:45:45.483481 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 21:45:45.483488 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 21:45:45.483498 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 21:45:45.483505 1112716 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 21:45:45.483513 1112716 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 21:45:45.483521 1112716 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 21:45:45.483527 1112716 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 21:45:45.483532 1112716 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 21:45:45.483538 1112716 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1002 21:45:45.483543 1112716 command_runner.go:130] > runtime_type = "oci"
	I1002 21:45:45.483549 1112716 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 21:45:45.483554 1112716 command_runner.go:130] > runtime_config_path = ""
	I1002 21:45:45.483559 1112716 command_runner.go:130] > monitor_path = ""
	I1002 21:45:45.483564 1112716 command_runner.go:130] > monitor_cgroup = ""
	I1002 21:45:45.483570 1112716 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 21:45:45.483686 1112716 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 21:45:45.483697 1112716 command_runner.go:130] > # running containers
	I1002 21:45:45.483702 1112716 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 21:45:45.483710 1112716 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 21:45:45.483747 1112716 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 21:45:45.483757 1112716 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 21:45:45.483764 1112716 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 21:45:45.483770 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 21:45:45.483775 1112716 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 21:45:45.483781 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 21:45:45.483787 1112716 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 21:45:45.483792 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 21:45:45.483800 1112716 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 21:45:45.483806 1112716 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 21:45:45.483814 1112716 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 21:45:45.483846 1112716 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 21:45:45.483859 1112716 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 21:45:45.483866 1112716 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 21:45:45.483877 1112716 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 21:45:45.483887 1112716 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 21:45:45.483893 1112716 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 21:45:45.483903 1112716 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 21:45:45.483930 1112716 command_runner.go:130] > # Example:
	I1002 21:45:45.483939 1112716 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 21:45:45.483945 1112716 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 21:45:45.483951 1112716 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 21:45:45.483958 1112716 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 21:45:45.483962 1112716 command_runner.go:130] > # cpuset = 0
	I1002 21:45:45.483967 1112716 command_runner.go:130] > # cpushares = "0-1"
	I1002 21:45:45.483972 1112716 command_runner.go:130] > # Where:
	I1002 21:45:45.483977 1112716 command_runner.go:130] > # The workload name is workload-type.
	I1002 21:45:45.483986 1112716 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 21:45:45.483993 1112716 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 21:45:45.484022 1112716 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 21:45:45.484034 1112716 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 21:45:45.484042 1112716 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 21:45:45.484046 1112716 command_runner.go:130] > # 
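For context, the workloads mechanism documented above can be exercised with a drop-in config and an annotated pod. A minimal sketch follows; the drop-in path, workload name, and pod manifest are illustrative and not part of this run:

	# Define the workload in a CRI-O drop-in (assumes /etc/crio/crio.conf.d is honoured):
	sudo tee /etc/crio/crio.conf.d/10-workload.conf <<-'EOF'
	[crio.runtime.workloads.workload-type]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	EOF
	sudo systemctl restart crio
	
	# Opt a pod in with the key-only activation annotation; per-container overrides
	# would use the $annotation_prefix forms described in the comments above.
	cat <<-'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""
	spec:
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF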
	I1002 21:45:45.484054 1112716 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 21:45:45.484058 1112716 command_runner.go:130] > #
	I1002 21:45:45.484068 1112716 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 21:45:45.484076 1112716 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 21:45:45.484084 1112716 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 21:45:45.484118 1112716 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 21:45:45.484126 1112716 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 21:45:45.484131 1112716 command_runner.go:130] > [crio.image]
	I1002 21:45:45.484138 1112716 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 21:45:45.484143 1112716 command_runner.go:130] > # default_transport = "docker://"
	I1002 21:45:45.484151 1112716 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 21:45:45.484159 1112716 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 21:45:45.484164 1112716 command_runner.go:130] > # global_auth_file = ""
	I1002 21:45:45.484170 1112716 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 21:45:45.484198 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:45:45.484207 1112716 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 21:45:45.484215 1112716 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 21:45:45.484223 1112716 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 21:45:45.484229 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:45:45.484235 1112716 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 21:45:45.484242 1112716 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 21:45:45.484249 1112716 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 21:45:45.484257 1112716 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 21:45:45.484264 1112716 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 21:45:45.484292 1112716 command_runner.go:130] > # pause_command = "/pause"
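As a concrete illustration of the pause settings above, an override could be carried in a drop-in; the drop-in path is an assumption, and whether a reload (rather than a restart) picks it up depends on the crio unit's ExecReload:

	sudo tee /etc/crio/crio.conf.d/05-pause.conf <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	EOF
	# pause_image supports live configuration reload, so a reload/SIGHUP should suffice:
	sudo systemctl reload crio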
	I1002 21:45:45.484302 1112716 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 21:45:45.484310 1112716 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 21:45:45.484318 1112716 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 21:45:45.484325 1112716 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 21:45:45.484332 1112716 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 21:45:45.484337 1112716 command_runner.go:130] > # signature_policy = ""
	I1002 21:45:45.484436 1112716 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 21:45:45.484449 1112716 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 21:45:45.484454 1112716 command_runner.go:130] > # changing them here.
	I1002 21:45:45.484460 1112716 command_runner.go:130] > # insecure_registries = [
	I1002 21:45:45.484464 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.484471 1112716 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 21:45:45.484478 1112716 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 21:45:45.484487 1112716 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 21:45:45.484494 1112716 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 21:45:45.484523 1112716 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 21:45:45.484534 1112716 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 21:45:45.484539 1112716 command_runner.go:130] > # CNI plugins.
	I1002 21:45:45.484544 1112716 command_runner.go:130] > [crio.network]
	I1002 21:45:45.484551 1112716 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 21:45:45.484558 1112716 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 21:45:45.484563 1112716 command_runner.go:130] > # cni_default_network = ""
	I1002 21:45:45.484571 1112716 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 21:45:45.484576 1112716 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 21:45:45.484583 1112716 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 21:45:45.484611 1112716 command_runner.go:130] > # plugin_dirs = [
	I1002 21:45:45.484628 1112716 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 21:45:45.484644 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.484667 1112716 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 21:45:45.484696 1112716 command_runner.go:130] > [crio.metrics]
	I1002 21:45:45.484723 1112716 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 21:45:45.484744 1112716 command_runner.go:130] > # enable_metrics = false
	I1002 21:45:45.484764 1112716 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 21:45:45.484774 1112716 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 21:45:45.484782 1112716 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 21:45:45.484812 1112716 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 21:45:45.484834 1112716 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 21:45:45.484855 1112716 command_runner.go:130] > # metrics_collectors = [
	I1002 21:45:45.484884 1112716 command_runner.go:130] > # 	"operations",
	I1002 21:45:45.484906 1112716 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 21:45:45.484926 1112716 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 21:45:45.484950 1112716 command_runner.go:130] > # 	"operations_errors",
	I1002 21:45:45.484982 1112716 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 21:45:45.485006 1112716 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 21:45:45.485030 1112716 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 21:45:45.485054 1112716 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 21:45:45.485086 1112716 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 21:45:45.485111 1112716 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 21:45:45.485131 1112716 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 21:45:45.485153 1112716 command_runner.go:130] > # 	"containers_oom_total",
	I1002 21:45:45.485173 1112716 command_runner.go:130] > # 	"containers_oom",
	I1002 21:45:45.485227 1112716 command_runner.go:130] > # 	"processes_defunct",
	I1002 21:45:45.485255 1112716 command_runner.go:130] > # 	"operations_total",
	I1002 21:45:45.485285 1112716 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 21:45:45.485309 1112716 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 21:45:45.485329 1112716 command_runner.go:130] > # 	"operations_errors_total",
	I1002 21:45:45.485351 1112716 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 21:45:45.485382 1112716 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 21:45:45.485408 1112716 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 21:45:45.485426 1112716 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 21:45:45.485445 1112716 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 21:45:45.485478 1112716 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 21:45:45.485499 1112716 command_runner.go:130] > # ]
	I1002 21:45:45.485519 1112716 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 21:45:45.485539 1112716 command_runner.go:130] > # metrics_port = 9090
	I1002 21:45:45.485560 1112716 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 21:45:45.485588 1112716 command_runner.go:130] > # metrics_socket = ""
	I1002 21:45:45.485614 1112716 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 21:45:45.485641 1112716 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 21:45:45.485666 1112716 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 21:45:45.485697 1112716 command_runner.go:130] > # certificate on any modification event.
	I1002 21:45:45.485724 1112716 command_runner.go:130] > # metrics_cert = ""
	I1002 21:45:45.485746 1112716 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 21:45:45.485767 1112716 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 21:45:45.485786 1112716 command_runner.go:130] > # metrics_key = ""
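The metrics settings above default to disabled. If they were enabled via a drop-in, the endpoint could be probed directly; the drop-in path below is an assumption, and the grep is only a spot check:

	sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	# Prometheus-format metrics should then be served on the configured port:
	curl -s http://127.0.0.1:9090/metrics | grep -m5 'crio_'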
	I1002 21:45:45.485822 1112716 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 21:45:45.485841 1112716 command_runner.go:130] > [crio.tracing]
	I1002 21:45:45.485863 1112716 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 21:45:45.485892 1112716 command_runner.go:130] > # enable_tracing = false
	I1002 21:45:45.485916 1112716 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 21:45:45.485936 1112716 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 21:45:45.485958 1112716 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 21:45:45.485987 1112716 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 21:45:45.486011 1112716 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 21:45:45.486028 1112716 command_runner.go:130] > [crio.stats]
	I1002 21:45:45.486048 1112716 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 21:45:45.486070 1112716 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 21:45:45.486097 1112716 command_runner.go:130] > # stats_collection_period = 0
	I1002 21:45:45.487315 1112716 command_runner.go:130] ! time="2023-10-02 21:45:45.477495897Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1002 21:45:45.487339 1112716 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 21:45:45.487449 1112716 cni.go:84] Creating CNI manager for ""
	I1002 21:45:45.487468 1112716 cni.go:136] 1 nodes found, recommending kindnet
	I1002 21:45:45.487509 1112716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 21:45:45.487535 1112716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-629060 NodeName:multinode-629060 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:45:45.487717 1112716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-629060"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
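The rendered config above is later copied to /var/tmp/minikube/kubeadm.yaml before init. To sanity-check such a file by hand without touching the node, a dry run is the least invasive option (on a docker-driver node the same --ignore-preflight-errors list as the real invocation may still be needed):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run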
	
	I1002 21:45:45.487797 1112716 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-629060 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 21:45:45.487868 1112716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 21:45:45.498692 1112716 command_runner.go:130] > kubeadm
	I1002 21:45:45.498721 1112716 command_runner.go:130] > kubectl
	I1002 21:45:45.498726 1112716 command_runner.go:130] > kubelet
	I1002 21:45:45.498754 1112716 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:45:45.498864 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:45:45.510834 1112716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1002 21:45:45.532181 1112716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:45:45.553639 1112716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1002 21:45:45.575251 1112716 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:45:45.579945 1112716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
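The one-liner above rewrites /etc/hosts idempotently: it drops any existing line ending in a tab plus the hostname, then appends a fresh entry and copies the result back. Spelled out, with values taken from this run:

	ip=192.168.58.2
	host=control-plane.minikube.internal
	{ grep -v $'\t'"${host}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${host}"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts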
	I1002 21:45:45.593434 1112716 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060 for IP: 192.168.58.2
	I1002 21:45:45.593464 1112716 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:45.593618 1112716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 21:45:45.593658 1112716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 21:45:45.593703 1112716 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key
	I1002 21:45:45.593714 1112716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt with IP's: []
	I1002 21:45:46.422863 1112716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt ...
	I1002 21:45:46.422896 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt: {Name:mk986b38872ed9c71682d05dd4a322f713aac75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:46.423109 1112716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key ...
	I1002 21:45:46.423123 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key: {Name:mk61b228a5ea7013646422500a4d54dd1df3aece Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
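minikube generates these client credentials in-process (crypto.go), but the end result is an ordinary CA-signed client certificate. A rough openssl equivalent, with the subject shown as an assumption rather than read from this run:

	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt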
	I1002 21:45:46.423217 1112716 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key.cee25041
	I1002 21:45:46.423232 1112716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 21:45:47.217421 1112716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt.cee25041 ...
	I1002 21:45:47.217455 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt.cee25041: {Name:mka01ae829eb0d939a139c55c72f039038d70e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:47.217685 1112716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key.cee25041 ...
	I1002 21:45:47.217699 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key.cee25041: {Name:mk69f0004e7c6be3911745831cc963553ee693fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:47.217796 1112716 certs.go:337] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt
	I1002 21:45:47.217896 1112716 certs.go:341] copying /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key
	I1002 21:45:47.217968 1112716 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.key
	I1002 21:45:47.217985 1112716 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.crt with IP's: []
	I1002 21:45:47.586422 1112716 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.crt ...
	I1002 21:45:47.586454 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.crt: {Name:mk9645de12b916a590173283e4fb6ad4af2fdbc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:47.586676 1112716 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.key ...
	I1002 21:45:47.586691 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.key: {Name:mkbe02daf24cb11e0ecc94f469b6bf573440bc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:47.586781 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:45:47.586815 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:45:47.586836 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:45:47.586850 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:45:47.586872 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:45:47.586892 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:45:47.586907 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:45:47.586923 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:45:47.586983 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 21:45:47.587026 1112716 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 21:45:47.587039 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:45:47.587066 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:45:47.587097 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:45:47.587136 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 21:45:47.587188 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:45:47.587234 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /usr/share/ca-certificates/10477322.pem
	I1002 21:45:47.587254 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:47.587269 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem -> /usr/share/ca-certificates/1047732.pem
	I1002 21:45:47.588040 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 21:45:47.619149 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:45:47.649720 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:45:47.678750 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 21:45:47.708405 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:45:47.736763 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:45:47.765896 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:45:47.800378 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:45:47.828934 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 21:45:47.858374 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:45:47.888153 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 21:45:47.916840 1112716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:45:47.938307 1112716 ssh_runner.go:195] Run: openssl version
	I1002 21:45:47.945498 1112716 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 21:45:47.945979 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 21:45:47.958430 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 21:45:47.963262 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:45:47.963342 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:45:47.963424 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 21:45:47.971693 1112716 command_runner.go:130] > 3ec20f2e
	I1002 21:45:47.972064 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:45:47.983907 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:45:47.995324 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:47.999944 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:47.999967 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:48.000026 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:48.010185 1112716 command_runner.go:130] > b5213941
	I1002 21:45:48.010627 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:45:48.023571 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 21:45:48.036685 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 21:45:48.041882 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:45:48.041945 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:45:48.042018 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 21:45:48.051029 1112716 command_runner.go:130] > 51391683
	I1002 21:45:48.051482 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
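The three blocks above repeat the same pattern for each CA bundle: copy or link the certificate into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0 so the OpenSSL lookup path finds it. Collapsed into one step for a single file:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "${cert}")   # b5213941 in this run
	sudo ln -fs "${cert}" "/etc/ssl/certs/${hash}.0"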
	I1002 21:45:48.063931 1112716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 21:45:48.068634 1112716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:45:48.068675 1112716 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:45:48.068716 1112716 kubeadm.go:404] StartCluster: {Name:multinode-629060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:45:48.068801 1112716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:45:48.068856 1112716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:45:48.115432 1112716 cri.go:89] found id: ""
	I1002 21:45:48.115561 1112716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:45:48.126795 1112716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1002 21:45:48.126821 1112716 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1002 21:45:48.126831 1112716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1002 21:45:48.126906 1112716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:45:48.138560 1112716 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:45:48.138631 1112716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:45:48.149913 1112716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1002 21:45:48.149940 1112716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1002 21:45:48.149950 1112716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1002 21:45:48.149961 1112716 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:45:48.149984 1112716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:45:48.150021 1112716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:45:48.261463 1112716 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 21:45:48.261494 1112716 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 21:45:48.366374 1112716 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:45:48.366412 1112716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:46:06.015657 1112716 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1002 21:46:06.015682 1112716 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1002 21:46:06.015720 1112716 kubeadm.go:322] [preflight] Running pre-flight checks
	I1002 21:46:06.015727 1112716 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 21:46:06.015806 1112716 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:46:06.015813 1112716 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:46:06.015864 1112716 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1045-aws
	I1002 21:46:06.015869 1112716 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 21:46:06.015900 1112716 kubeadm.go:322] OS: Linux
	I1002 21:46:06.015905 1112716 command_runner.go:130] > OS: Linux
	I1002 21:46:06.015946 1112716 kubeadm.go:322] CGROUPS_CPU: enabled
	I1002 21:46:06.015951 1112716 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 21:46:06.015995 1112716 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1002 21:46:06.016000 1112716 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 21:46:06.016043 1112716 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1002 21:46:06.016048 1112716 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 21:46:06.016092 1112716 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1002 21:46:06.016097 1112716 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 21:46:06.016141 1112716 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1002 21:46:06.016146 1112716 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 21:46:06.016193 1112716 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1002 21:46:06.016198 1112716 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 21:46:06.016239 1112716 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1002 21:46:06.016245 1112716 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 21:46:06.016289 1112716 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1002 21:46:06.016294 1112716 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 21:46:06.016336 1112716 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1002 21:46:06.016342 1112716 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 21:46:06.016407 1112716 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:46:06.016412 1112716 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:46:06.016499 1112716 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:46:06.016504 1112716 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:46:06.016589 1112716 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:46:06.016594 1112716 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:46:06.016651 1112716 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:46:06.019278 1112716 out.go:204]   - Generating certificates and keys ...
	I1002 21:46:06.016853 1112716 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:46:06.019378 1112716 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1002 21:46:06.019391 1112716 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1002 21:46:06.019449 1112716 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1002 21:46:06.019455 1112716 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1002 21:46:06.019516 1112716 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:46:06.019522 1112716 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:46:06.019574 1112716 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:46:06.019578 1112716 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:46:06.019638 1112716 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:46:06.019644 1112716 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1002 21:46:06.019690 1112716 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1002 21:46:06.019695 1112716 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1002 21:46:06.019745 1112716 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1002 21:46:06.019750 1112716 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1002 21:46:06.019860 1112716 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-629060] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:46:06.019866 1112716 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-629060] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:46:06.019914 1112716 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1002 21:46:06.019919 1112716 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1002 21:46:06.020028 1112716 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-629060] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:46:06.020033 1112716 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-629060] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:46:06.020093 1112716 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:46:06.020098 1112716 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:46:06.020156 1112716 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:46:06.020161 1112716 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:46:06.020202 1112716 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1002 21:46:06.020207 1112716 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1002 21:46:06.020257 1112716 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:46:06.020263 1112716 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:46:06.020310 1112716 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:46:06.020316 1112716 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:46:06.020364 1112716 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:46:06.020369 1112716 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:46:06.020428 1112716 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:46:06.020433 1112716 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:46:06.020483 1112716 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:46:06.020493 1112716 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:46:06.020568 1112716 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:46:06.020573 1112716 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:46:06.020634 1112716 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:46:06.022973 1112716 out.go:204]   - Booting up control plane ...
	I1002 21:46:06.020827 1112716 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:46:06.023157 1112716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:46:06.023174 1112716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:46:06.023258 1112716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:46:06.023269 1112716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:46:06.023338 1112716 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:46:06.023353 1112716 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:46:06.023464 1112716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:46:06.023472 1112716 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:46:06.023558 1112716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:46:06.023565 1112716 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:46:06.023605 1112716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 21:46:06.023613 1112716 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1002 21:46:06.023769 1112716 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 21:46:06.023777 1112716 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1002 21:46:06.023854 1112716 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 21:46:06.023861 1112716 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502476 seconds
	I1002 21:46:06.023969 1112716 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:46:06.023978 1112716 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 21:46:06.024104 1112716 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:46:06.024112 1112716 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 21:46:06.024171 1112716 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:46:06.024178 1112716 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 21:46:06.024364 1112716 command_runner.go:130] > [mark-control-plane] Marking the node multinode-629060 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:46:06.024371 1112716 kubeadm.go:322] [mark-control-plane] Marking the node multinode-629060 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 21:46:06.024429 1112716 command_runner.go:130] > [bootstrap-token] Using token: vzvibp.hhq365bl4s5tqfrt
	I1002 21:46:06.024438 1112716 kubeadm.go:322] [bootstrap-token] Using token: vzvibp.hhq365bl4s5tqfrt
	I1002 21:46:06.026766 1112716 out.go:204]   - Configuring RBAC rules ...
	I1002 21:46:06.026960 1112716 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:46:06.026997 1112716 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 21:46:06.027109 1112716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:46:06.027124 1112716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 21:46:06.027266 1112716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:46:06.027276 1112716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 21:46:06.027405 1112716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:46:06.027413 1112716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 21:46:06.027529 1112716 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:46:06.027538 1112716 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 21:46:06.027643 1112716 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:46:06.027652 1112716 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 21:46:06.027771 1112716 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:46:06.027783 1112716 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 21:46:06.027828 1112716 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1002 21:46:06.027836 1112716 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1002 21:46:06.027883 1112716 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1002 21:46:06.027891 1112716 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1002 21:46:06.027896 1112716 kubeadm.go:322] 
	I1002 21:46:06.027957 1112716 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1002 21:46:06.027966 1112716 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1002 21:46:06.027971 1112716 kubeadm.go:322] 
	I1002 21:46:06.028050 1112716 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1002 21:46:06.028058 1112716 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1002 21:46:06.028062 1112716 kubeadm.go:322] 
	I1002 21:46:06.028089 1112716 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1002 21:46:06.028097 1112716 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1002 21:46:06.028157 1112716 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:46:06.028165 1112716 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 21:46:06.028217 1112716 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:46:06.028225 1112716 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 21:46:06.028230 1112716 kubeadm.go:322] 
	I1002 21:46:06.028285 1112716 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1002 21:46:06.028292 1112716 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1002 21:46:06.028297 1112716 kubeadm.go:322] 
	I1002 21:46:06.028348 1112716 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:46:06.028356 1112716 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 21:46:06.028361 1112716 kubeadm.go:322] 
	I1002 21:46:06.028415 1112716 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1002 21:46:06.028423 1112716 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1002 21:46:06.028499 1112716 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:46:06.028506 1112716 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 21:46:06.028575 1112716 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:46:06.028583 1112716 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 21:46:06.028588 1112716 kubeadm.go:322] 
	I1002 21:46:06.028674 1112716 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:46:06.028681 1112716 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 21:46:06.028759 1112716 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1002 21:46:06.028767 1112716 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1002 21:46:06.028772 1112716 kubeadm.go:322] 
	I1002 21:46:06.028858 1112716 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vzvibp.hhq365bl4s5tqfrt \
	I1002 21:46:06.028865 1112716 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vzvibp.hhq365bl4s5tqfrt \
	I1002 21:46:06.028970 1112716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 \
	I1002 21:46:06.028977 1112716 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 \
	I1002 21:46:06.029000 1112716 command_runner.go:130] > 	--control-plane 
	I1002 21:46:06.029007 1112716 kubeadm.go:322] 	--control-plane 
	I1002 21:46:06.029012 1112716 kubeadm.go:322] 
	I1002 21:46:06.029099 1112716 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:46:06.029106 1112716 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1002 21:46:06.029111 1112716 kubeadm.go:322] 
	I1002 21:46:06.029194 1112716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vzvibp.hhq365bl4s5tqfrt \
	I1002 21:46:06.029236 1112716 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vzvibp.hhq365bl4s5tqfrt \
	I1002 21:46:06.029364 1112716 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 
	I1002 21:46:06.029388 1112716 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 
	I1002 21:46:06.029423 1112716 cni.go:84] Creating CNI manager for ""
	I1002 21:46:06.029435 1112716 cni.go:136] 1 nodes found, recommending kindnet
	I1002 21:46:06.032160 1112716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 21:46:06.034589 1112716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:46:06.041359 1112716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 21:46:06.041388 1112716 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 21:46:06.041397 1112716 command_runner.go:130] > Device: 36h/54d	Inode: 1572688     Links: 1
	I1002 21:46:06.041404 1112716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:46:06.041411 1112716 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 21:46:06.041417 1112716 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 21:46:06.041424 1112716 command_runner.go:130] > Change: 2023-10-02 21:23:08.933130862 +0000
	I1002 21:46:06.041437 1112716 command_runner.go:130] >  Birth: 2023-10-02 21:23:08.889131076 +0000
	I1002 21:46:06.041677 1112716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 21:46:06.041695 1112716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 21:46:06.101199 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:46:06.983340 1112716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1002 21:46:06.991582 1112716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1002 21:46:07.004438 1112716 command_runner.go:130] > serviceaccount/kindnet created
	I1002 21:46:07.020582 1112716 command_runner.go:130] > daemonset.apps/kindnet created
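The CNI step above stats the portmap plugin, writes the rendered kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the kubectl binary bundled for v1.28.2. A minimal sketch of that apply step in Go; minikube runs the command through its ssh_runner inside the node, whereas this sketch runs it directly, and the helper name is illustrative:

    package sketch

    import (
        "fmt"
        "os/exec"
    )

    // applyCNIManifest runs the bundled kubectl against the node-local kubeconfig
    // to apply the rendered kindnet manifest, as in the apply step above.
    // Paths are taken from this run's log.
    func applyCNIManifest() error {
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.28.2/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply: %v: %s", err, out)
        }
        fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet created"
        return nil
    }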
	I1002 21:46:07.021935 1112716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 21:46:07.022068 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:07.022149 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86 minikube.k8s.io/name=multinode-629060 minikube.k8s.io/updated_at=2023_10_02T21_46_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:07.183759 1112716 command_runner.go:130] > node/multinode-629060 labeled
	I1002 21:46:07.184970 1112716 command_runner.go:130] > -16
	I1002 21:46:07.185007 1112716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1002 21:46:07.185044 1112716 ops.go:34] apiserver oom_adj: -16
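The -16 recorded above comes from reading /proc/$(pgrep kube-apiserver)/oom_adj to confirm the API server's OOM score adjustment. A small Go sketch of the same probe, assuming pgrep is available on the node (helper name illustrative):

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiServerOOMAdj resolves the kube-apiserver PID with pgrep and reads its
    // /proc/<pid>/oom_adj, the same probe that reported -16 in this run.
    func apiServerOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("pgrep kube-apiserver: %w", err)
        }
        pid := strings.Fields(string(out))[0] // first PID if several match
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }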
	I1002 21:46:07.185113 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:07.306221 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:07.306325 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:07.402606 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:07.903286 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:07.987732 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:08.403339 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:08.494533 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:08.902849 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:08.997891 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:09.403322 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:09.496557 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:09.902910 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:09.993656 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:10.402907 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:10.497359 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:10.903407 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:10.995318 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:11.402835 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:11.490737 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:11.903089 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:11.993928 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:12.402842 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:12.496308 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:12.903095 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:12.988168 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:13.403625 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:13.495698 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:13.903250 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:14.004218 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:14.403551 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:14.497689 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:14.903006 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:14.997790 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:15.403383 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:15.500326 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:15.902833 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:15.990967 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:16.402967 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:16.496192 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:16.903821 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:16.997575 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:17.403094 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:17.512219 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:17.902912 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:17.991279 1112716 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1002 21:46:18.403493 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 21:46:18.527648 1112716 command_runner.go:130] > NAME      SECRETS   AGE
	I1002 21:46:18.527668 1112716 command_runner.go:130] > default   0         0s
	I1002 21:46:18.530784 1112716 kubeadm.go:1081] duration metric: took 11.50876114s to wait for elevateKubeSystemPrivileges.
	I1002 21:46:18.530813 1112716 kubeadm.go:406] StartCluster complete in 30.462100746s
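The NotFound errors above are expected: after kubeadm init, minikube keeps polling for the "default" ServiceAccount roughly every 500 ms until the controller manager creates it, which took about 11.5 s in this run. A minimal client-go sketch of the same wait, assuming a clientset built from the admin kubeconfig (function name illustrative):

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultServiceAccount polls the "default" ServiceAccount in the
    // "default" namespace every 500ms until it exists or the context expires,
    // mirroring the retry loop in the log above.
    func waitForDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface) error {
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
            if err == nil {
                return nil // serviceaccount exists; setup can continue
            }
            if !apierrors.IsNotFound(err) {
                return err
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }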
	I1002 21:46:18.530831 1112716 settings.go:142] acquiring lock: {Name:mk84ed9b341869374b10cf082af1bfa542d39dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:46:18.530896 1112716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:18.531563 1112716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:46:18.532068 1112716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:18.532310 1112716 kapi.go:59] client config for multinode-629060: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:46:18.533471 1112716 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:46:18.533526 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 21:46:18.533558 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 21:46:18.533569 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:18.533578 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:18.533585 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:18.533621 1112716 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 21:46:18.533679 1112716 addons.go:69] Setting storage-provisioner=true in profile "multinode-629060"
	I1002 21:46:18.533692 1112716 addons.go:231] Setting addon storage-provisioner=true in "multinode-629060"
	I1002 21:46:18.533747 1112716 host.go:66] Checking if "multinode-629060" exists ...
	I1002 21:46:18.533790 1112716 cert_rotation.go:137] Starting client certificate rotation controller
	I1002 21:46:18.533818 1112716 addons.go:69] Setting default-storageclass=true in profile "multinode-629060"
	I1002 21:46:18.533831 1112716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-629060"
	I1002 21:46:18.534116 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:46:18.534208 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:46:18.579416 1112716 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1002 21:46:18.579436 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:18.579445 1112716 round_trippers.go:580]     Audit-Id: c7d900fd-3b27-4b5f-9bda-c5e5233fea4f
	I1002 21:46:18.579451 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:18.579457 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:18.579465 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:18.579471 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:18.579477 1112716 round_trippers.go:580]     Content-Length: 291
	I1002 21:46:18.579483 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:18 GMT
	I1002 21:46:18.579513 1112716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"308c3efb-883f-4d10-b233-122055076f8b","resourceVersion":"326","creationTimestamp":"2023-10-02T21:46:05Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 21:46:18.579895 1112716 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"308c3efb-883f-4d10-b233-122055076f8b","resourceVersion":"326","creationTimestamp":"2023-10-02T21:46:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 21:46:18.579942 1112716 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 21:46:18.579949 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:18.579956 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:18.579963 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:18.579969 1112716 round_trippers.go:473]     Content-Type: application/json
	I1002 21:46:18.582790 1112716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:18.583052 1112716 kapi.go:59] client config for multinode-629060: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:46:18.583294 1112716 addons.go:231] Setting addon default-storageclass=true in "multinode-629060"
	I1002 21:46:18.583319 1112716 host.go:66] Checking if "multinode-629060" exists ...
	I1002 21:46:18.583749 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:46:18.590819 1112716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:46:18.593342 1112716 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:46:18.593370 1112716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:46:18.593441 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:46:18.607693 1112716 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:46:18.607713 1112716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:46:18.607772 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:46:18.637772 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:46:18.658737 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:46:18.666484 1112716 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I1002 21:46:18.666504 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:18.666513 1112716 round_trippers.go:580]     Content-Length: 291
	I1002 21:46:18.666528 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:18 GMT
	I1002 21:46:18.666535 1112716 round_trippers.go:580]     Audit-Id: 8bd8ec19-2699-4259-b608-5e33ba67ed84
	I1002 21:46:18.666541 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:18.666547 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:18.666554 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:18.666560 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:18.675744 1112716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"308c3efb-883f-4d10-b233-122055076f8b","resourceVersion":"341","creationTimestamp":"2023-10-02T21:46:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 21:46:18.675911 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 21:46:18.675920 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:18.675928 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:18.675935 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:18.744159 1112716 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I1002 21:46:18.744190 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:18.744201 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:18.744210 1112716 round_trippers.go:580]     Content-Length: 291
	I1002 21:46:18.744218 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:18 GMT
	I1002 21:46:18.744225 1112716 round_trippers.go:580]     Audit-Id: 10d996d1-5714-4812-a0ad-595afdb10acd
	I1002 21:46:18.744231 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:18.744238 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:18.744245 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:18.744270 1112716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"308c3efb-883f-4d10-b233-122055076f8b","resourceVersion":"341","creationTimestamp":"2023-10-02T21:46:05Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1002 21:46:18.744377 1112716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-629060" context rescaled to 1 replicas
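The GET/PUT pair above rescales the coredns Deployment from 2 to 1 replicas through the autoscaling/v1 Scale subresource, the usual trim for a single-node cluster. A hedged client-go equivalent of that round trip (clientset construction omitted):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS fetches the Scale subresource of kube-system/coredns and
    // writes it back with a single replica, as in the GET/PUT pair in the log.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }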
	I1002 21:46:18.744410 1112716 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:46:18.747736 1112716 out.go:177] * Verifying Kubernetes components...
	I1002 21:46:18.749588 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:46:18.809338 1112716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:46:18.829321 1112716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:46:18.836891 1112716 command_runner.go:130] > apiVersion: v1
	I1002 21:46:18.836912 1112716 command_runner.go:130] > data:
	I1002 21:46:18.836918 1112716 command_runner.go:130] >   Corefile: |
	I1002 21:46:18.836923 1112716 command_runner.go:130] >     .:53 {
	I1002 21:46:18.836927 1112716 command_runner.go:130] >         errors
	I1002 21:46:18.836933 1112716 command_runner.go:130] >         health {
	I1002 21:46:18.836939 1112716 command_runner.go:130] >            lameduck 5s
	I1002 21:46:18.836949 1112716 command_runner.go:130] >         }
	I1002 21:46:18.836956 1112716 command_runner.go:130] >         ready
	I1002 21:46:18.836963 1112716 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1002 21:46:18.836973 1112716 command_runner.go:130] >            pods insecure
	I1002 21:46:18.836985 1112716 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1002 21:46:18.836996 1112716 command_runner.go:130] >            ttl 30
	I1002 21:46:18.837001 1112716 command_runner.go:130] >         }
	I1002 21:46:18.837006 1112716 command_runner.go:130] >         prometheus :9153
	I1002 21:46:18.837016 1112716 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1002 21:46:18.837022 1112716 command_runner.go:130] >            max_concurrent 1000
	I1002 21:46:18.837028 1112716 command_runner.go:130] >         }
	I1002 21:46:18.837032 1112716 command_runner.go:130] >         cache 30
	I1002 21:46:18.837037 1112716 command_runner.go:130] >         loop
	I1002 21:46:18.837042 1112716 command_runner.go:130] >         reload
	I1002 21:46:18.837050 1112716 command_runner.go:130] >         loadbalance
	I1002 21:46:18.837055 1112716 command_runner.go:130] >     }
	I1002 21:46:18.837064 1112716 command_runner.go:130] > kind: ConfigMap
	I1002 21:46:18.837069 1112716 command_runner.go:130] > metadata:
	I1002 21:46:18.837076 1112716 command_runner.go:130] >   creationTimestamp: "2023-10-02T21:46:05Z"
	I1002 21:46:18.837084 1112716 command_runner.go:130] >   name: coredns
	I1002 21:46:18.837096 1112716 command_runner.go:130] >   namespace: kube-system
	I1002 21:46:18.837104 1112716 command_runner.go:130] >   resourceVersion: "239"
	I1002 21:46:18.837110 1112716 command_runner.go:130] >   uid: af646fee-6382-414f-98fc-aff451ec2f4a
	I1002 21:46:18.838605 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 21:46:18.839017 1112716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:18.839284 1112716 kapi.go:59] client config for multinode-629060: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:46:18.839603 1112716 node_ready.go:35] waiting up to 6m0s for node "multinode-629060" to be "Ready" ...
	I1002 21:46:18.839677 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:18.839689 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:18.839700 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:18.839714 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:19.013929 1112716 round_trippers.go:574] Response Status: 200 OK in 174 milliseconds
	I1002 21:46:19.013954 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:19.013964 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:19.013971 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:19.013977 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:19.013983 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:19 GMT
	I1002 21:46:19.013990 1112716 round_trippers.go:580]     Audit-Id: 61bccfc8-5dd0-4720-9a99-5dddf995c2ef
	I1002 21:46:19.013999 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:19.102983 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:19.103876 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:19.103894 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:19.103905 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:19.103914 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:19.253475 1112716 round_trippers.go:574] Response Status: 200 OK in 149 milliseconds
	I1002 21:46:19.253500 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:19.253509 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:19.253516 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:19 GMT
	I1002 21:46:19.253522 1112716 round_trippers.go:580]     Audit-Id: f7b21cfa-0cd2-4ad1-bb4e-f810546c75a6
	I1002 21:46:19.253528 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:19.253534 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:19.253544 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:19.294421 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:19.600582 1112716 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1002 21:46:19.608082 1112716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1002 21:46:19.617845 1112716 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 21:46:19.628791 1112716 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1002 21:46:19.637296 1112716 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1002 21:46:19.648351 1112716 command_runner.go:130] > pod/storage-provisioner created
	I1002 21:46:19.654195 1112716 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1002 21:46:19.654328 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1002 21:46:19.654341 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:19.654351 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:19.654358 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:19.654452 1112716 command_runner.go:130] > configmap/coredns replaced
	I1002 21:46:19.654476 1112716 start.go:923] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
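The ConfigMap replacement above is the host-record injection: the Corefile is piped through sed to insert a hosts block mapping host.minikube.internal to the host gateway (192.168.58.1 here) ahead of the forward plugin, plus a log directive. A rough Go sketch of the same edit done with client-go rather than sed; this is an approximation, not minikube's own code, and indentation inside the Corefile is cosmetic:

    package sketch

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // injectHostRecord inserts a "hosts" block ahead of the forward plugin in
    // the coredns Corefile so host.minikube.internal resolves to the host IP,
    // approximating what the sed pipeline in the log does.
    func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }"
        var out []string
        for _, line := range strings.Split(cm.Data["Corefile"], "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hosts) // add the hosts block just before the forward plugin
            }
            out = append(out, line)
        }
        cm.Data["Corefile"] = strings.Join(out, "\n")
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }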
	I1002 21:46:19.657662 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:19.657697 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:19.657707 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:19.657713 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:19.657720 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:19.657727 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:19.657733 1112716 round_trippers.go:580]     Content-Length: 1273
	I1002 21:46:19.657739 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:19 GMT
	I1002 21:46:19.657747 1112716 round_trippers.go:580]     Audit-Id: 293a0a88-fd83-4c27-bd30-3d7201e95492
	I1002 21:46:19.657890 1112716 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"372"},"items":[{"metadata":{"name":"standard","uid":"314f6f8f-4794-40a3-9210-d31c041438e4","resourceVersion":"357","creationTimestamp":"2023-10-02T21:46:19Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T21:46:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1002 21:46:19.658278 1112716 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"314f6f8f-4794-40a3-9210-d31c041438e4","resourceVersion":"357","creationTimestamp":"2023-10-02T21:46:19Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T21:46:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 21:46:19.658324 1112716 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1002 21:46:19.658329 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:19.658337 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:19.658344 1112716 round_trippers.go:473]     Content-Type: application/json
	I1002 21:46:19.658350 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:19.663595 1112716 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1002 21:46:19.663658 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:19.663681 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:19 GMT
	I1002 21:46:19.663711 1112716 round_trippers.go:580]     Audit-Id: b9246101-df91-4361-b124-aaf8b0440d40
	I1002 21:46:19.663724 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:19.663731 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:19.663737 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:19.663744 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:19.663751 1112716 round_trippers.go:580]     Content-Length: 1220
	I1002 21:46:19.663780 1112716 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"314f6f8f-4794-40a3-9210-d31c041438e4","resourceVersion":"357","creationTimestamp":"2023-10-02T21:46:19Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-02T21:46:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1002 21:46:19.666273 1112716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 21:46:19.668008 1112716 addons.go:502] enable addons completed in 1.134380724s: enabled=[storage-provisioner default-storageclass]
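Besides applying storage-provisioner.yaml and storageclass.yaml, the GET/PUT against /apis/storage.k8s.io/v1/storageclasses above re-asserts the is-default-class annotation on the "standard" class. A brief client-go sketch of verifying which StorageClass is the default (illustrative only):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultStorageClass lists StorageClasses and returns the one annotated as
    // the cluster default, which should be "standard" after the addon run above.
    func defaultStorageClass(ctx context.Context, cs kubernetes.Interface) (string, error) {
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return "", err
        }
        for _, sc := range scs.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                return sc.Name, nil
            }
        }
        return "", fmt.Errorf("no default StorageClass found")
    }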
	I1002 21:46:19.795587 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:19.795613 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:19.795630 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:19.795640 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:19.798330 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:19.798357 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:19.798367 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:19.798373 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:19.798387 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:19.798394 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:19 GMT
	I1002 21:46:19.798400 1112716 round_trippers.go:580]     Audit-Id: ff203275-c450-4041-9c7d-b5d6b09eb88e
	I1002 21:46:19.798406 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:19.798550 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:20.296029 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:20.296050 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:20.296060 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:20.296068 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:20.298561 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:20.298628 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:20.298651 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:20.298673 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:20.298702 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:20.298736 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:20.298742 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:20 GMT
	I1002 21:46:20.298748 1112716 round_trippers.go:580]     Audit-Id: 348d46e7-ab19-43e5-8d70-5a8c11715f79
	I1002 21:46:20.298861 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:20.795060 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:20.795127 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:20.795151 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:20.795173 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:20.798455 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:20.798538 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:20.798561 1112716 round_trippers.go:580]     Audit-Id: 6abfb1db-dc23-48c0-9638-8c9ec62e6fb6
	I1002 21:46:20.798582 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:20.798615 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:20.798640 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:20.798664 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:20.798697 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:20 GMT
	I1002 21:46:20.798873 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:21.295839 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:21.295863 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.295873 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.295880 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.298498 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.298565 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.298588 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.298612 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.298644 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.298671 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.298693 1112716 round_trippers.go:580]     Audit-Id: 1428c0f5-ea82-414b-abe9-c942f24b943b
	I1002 21:46:21.298715 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.298890 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"310","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1002 21:46:21.299335 1112716 node_ready.go:58] node "multinode-629060" has status "Ready":"False"
	I1002 21:46:21.795584 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:21.795606 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.795625 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.795632 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.798077 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.798102 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.798112 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.798119 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.798125 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.798132 1112716 round_trippers.go:580]     Audit-Id: 42a28f51-6a29-4dbd-8e50-26730bfbdd6d
	I1002 21:46:21.798138 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.798148 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.798341 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:21.798739 1112716 node_ready.go:49] node "multinode-629060" has status "Ready":"True"
	I1002 21:46:21.798756 1112716 node_ready.go:38] duration metric: took 2.959134454s waiting for node "multinode-629060" to be "Ready" ...
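The node wait above polls GET /api/v1/nodes/multinode-629060 until the Ready condition turns True, which took about 3 s in this run. A minimal client-go sketch of that readiness predicate (illustrative, not minikube's own node_ready.go):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady fetches a Node and reports whether its Ready condition is
    // True, the predicate the wait loop above evaluates on each GET.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }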
	I1002 21:46:21.798767 1112716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:46:21.798861 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:46:21.798873 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.798883 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.798891 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.802563 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:21.802586 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.802595 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.802602 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.802620 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.802626 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.802640 1112716 round_trippers.go:580]     Audit-Id: 68650ee5-016a-438f-b7e8-cd4b44c899a1
	I1002 21:46:21.802647 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.803471 1112716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"395"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"394","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I1002 21:46:21.807367 1112716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:21.807456 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:46:21.807468 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.807477 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.807485 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.809993 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.810016 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.810026 1112716 round_trippers.go:580]     Audit-Id: c85c5083-5ce5-4cbd-af5a-03337738e484
	I1002 21:46:21.810032 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.810039 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.810057 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.810068 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.810074 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.810237 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"394","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 21:46:21.810744 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:21.810761 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.810772 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.810780 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.813047 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.813079 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.813095 1112716 round_trippers.go:580]     Audit-Id: 3e6b4805-8757-486c-ac84-a87b237b03c3
	I1002 21:46:21.813102 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.813108 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.813118 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.813130 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.813146 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.813377 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:21.813970 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:46:21.813998 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.814015 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.814033 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.816619 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.816679 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.816701 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.816723 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.816760 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.816786 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.816808 1112716 round_trippers.go:580]     Audit-Id: ad1e938e-4a12-4c37-b402-d7685c058e63
	I1002 21:46:21.816843 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.816977 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"394","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 21:46:21.817542 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:21.817562 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:21.817571 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:21.817578 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:21.819949 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:21.819971 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:21.819980 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:21.819987 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:21.819994 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:21.820000 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:21 GMT
	I1002 21:46:21.820010 1112716 round_trippers.go:580]     Audit-Id: d1c06c88-d46d-4d2e-a162-2670b4b73a89
	I1002 21:46:21.820017 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:21.820223 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:22.321395 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:46:22.321466 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:22.321481 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:22.321490 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:22.324080 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:22.324147 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:22.324164 1112716 round_trippers.go:580]     Audit-Id: 0990947f-7e49-4356-891f-0c119923e35f
	I1002 21:46:22.324183 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:22.324190 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:22.324196 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:22.324202 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:22.324211 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:22 GMT
	I1002 21:46:22.324342 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"394","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 21:46:22.324874 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:22.324891 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:22.324899 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:22.324906 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:22.327298 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:22.327322 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:22.327330 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:22.327345 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:22 GMT
	I1002 21:46:22.327351 1112716 round_trippers.go:580]     Audit-Id: e7c649db-a0d2-4b7e-844f-aa7bdfbbcdf5
	I1002 21:46:22.327358 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:22.327363 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:22.327370 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:22.327522 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:22.821027 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:46:22.821054 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:22.821064 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:22.821071 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:22.823619 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:22.823649 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:22.823658 1112716 round_trippers.go:580]     Audit-Id: b0f009c6-9d8e-4a96-bd83-02556dbabce3
	I1002 21:46:22.823666 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:22.823672 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:22.823679 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:22.823686 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:22.823695 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:22 GMT
	I1002 21:46:22.823827 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"394","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1002 21:46:22.824350 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:22.824364 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:22.824373 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:22.824386 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:22.826748 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:22.826781 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:22.826790 1112716 round_trippers.go:580]     Audit-Id: c2508b74-1677-4861-8146-5322649fe39f
	I1002 21:46:22.826797 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:22.826803 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:22.826812 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:22.826827 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:22.826833 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:22 GMT
	I1002 21:46:22.827051 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.320839 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:46:23.320860 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.320869 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.320877 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.323521 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.323548 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.323557 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.323564 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.323570 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.323577 1112716 round_trippers.go:580]     Audit-Id: 0a579111-1504-4eee-b62b-2e65505ee160
	I1002 21:46:23.323584 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.323590 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.323762 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"408","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1002 21:46:23.324302 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.324323 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.324331 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.324338 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.326867 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.326888 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.326898 1112716 round_trippers.go:580]     Audit-Id: da16d601-d662-416e-91b6-61e2346efe2a
	I1002 21:46:23.326905 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.326911 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.326917 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.326923 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.326929 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.327041 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.327435 1112716 pod_ready.go:92] pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:23.327445 1112716 pod_ready.go:81] duration metric: took 1.520051118s waiting for pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.327456 1112716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.327514 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629060
	I1002 21:46:23.327519 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.327527 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.327534 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.329989 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.330009 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.330018 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.330024 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.330053 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.330061 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.330068 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.330077 1112716 round_trippers.go:580]     Audit-Id: fc93974b-978a-4b6a-8cd0-43cf39a5dac4
	I1002 21:46:23.330320 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629060","namespace":"kube-system","uid":"6bb8beb8-c1c5-4b2c-9a6e-1b00db71d13a","resourceVersion":"287","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ed64f9398b8edc929707995e6df5dc48","kubernetes.io/config.mirror":"ed64f9398b8edc929707995e6df5dc48","kubernetes.io/config.seen":"2023-10-02T21:46:05.978598999Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1002 21:46:23.330788 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.330797 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.330806 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.330813 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.332907 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.332958 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.332980 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.333002 1112716 round_trippers.go:580]     Audit-Id: eb1283ad-52e7-48b6-b0c2-4c0748624c11
	I1002 21:46:23.333040 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.333066 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.333087 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.333109 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.333286 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.333697 1112716 pod_ready.go:92] pod "etcd-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:23.333714 1112716 pod_ready.go:81] duration metric: took 6.251174ms waiting for pod "etcd-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.333728 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.333784 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629060
	I1002 21:46:23.333794 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.333801 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.333808 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.336122 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.336143 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.336151 1112716 round_trippers.go:580]     Audit-Id: 0371ee91-6bb0-46a5-8eb9-45116b6833ed
	I1002 21:46:23.336158 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.336164 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.336170 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.336177 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.336187 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.336611 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629060","namespace":"kube-system","uid":"6a9fbd26-ddd8-4dcc-9a48-217bfab74392","resourceVersion":"293","creationTimestamp":"2023-10-02T21:46:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"408ea7957b5ef07fae9dc9a9d3933e01","kubernetes.io/config.mirror":"408ea7957b5ef07fae9dc9a9d3933e01","kubernetes.io/config.seen":"2023-10-02T21:45:57.949256597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1002 21:46:23.337168 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.337185 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.337194 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.337219 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.339448 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.339495 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.339516 1112716 round_trippers.go:580]     Audit-Id: d2a2d2e0-a2e5-4ba6-9f9f-6ee1f6bdbed4
	I1002 21:46:23.339538 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.339572 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.339595 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.339615 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.339636 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.339775 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.340180 1112716 pod_ready.go:92] pod "kube-apiserver-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:23.340198 1112716 pod_ready.go:81] duration metric: took 6.462291ms waiting for pod "kube-apiserver-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.340209 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.340269 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629060
	I1002 21:46:23.340279 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.340287 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.340294 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.342595 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.342614 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.342622 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.342629 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.342640 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.342646 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.342657 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.342664 1112716 round_trippers.go:580]     Audit-Id: ef5fe491-1263-44a5-a800-a51ceab8c296
	I1002 21:46:23.343040 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629060","namespace":"kube-system","uid":"7477711d-6adc-4851-994e-3d41d599f050","resourceVersion":"289","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"221d35ba8a34143028534e3bbeb90aec","kubernetes.io/config.mirror":"221d35ba8a34143028534e3bbeb90aec","kubernetes.io/config.seen":"2023-10-02T21:46:05.978610239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1002 21:46:23.395744 1112716 request.go:629] Waited for 52.181553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.395865 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.395877 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.395923 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.395944 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.398453 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.398478 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.398486 1112716 round_trippers.go:580]     Audit-Id: 4478eb94-b4cf-4ace-bb6f-f3134c97de62
	I1002 21:46:23.398493 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.398500 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.398506 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.398513 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.398532 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.398861 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.399260 1112716 pod_ready.go:92] pod "kube-controller-manager-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:23.399279 1112716 pod_ready.go:81] duration metric: took 59.059227ms waiting for pod "kube-controller-manager-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.399293 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9slzp" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.595624 1112716 request.go:629] Waited for 196.262665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9slzp
	I1002 21:46:23.595687 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9slzp
	I1002 21:46:23.595693 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.595701 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.595712 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.598190 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.598215 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.598224 1112716 round_trippers.go:580]     Audit-Id: fee682cf-3b8c-4fa1-a478-d8ad999fc380
	I1002 21:46:23.598230 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.598236 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.598243 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.598250 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.598259 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.598931 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9slzp","generateName":"kube-proxy-","namespace":"kube-system","uid":"053392fd-91ec-4cc0-98c3-d35660bbe40b","resourceVersion":"383","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1af0d895-df42-437e-b5ac-d12205e17520","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1af0d895-df42-437e-b5ac-d12205e17520\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1002 21:46:23.795653 1112716 request.go:629] Waited for 196.22959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.795715 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:23.795725 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.795736 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.795747 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:23.798273 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:23.798297 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:23.798305 1112716 round_trippers.go:580]     Audit-Id: c0a23195-9d56-4146-96ab-ea70fc1a3d3e
	I1002 21:46:23.798312 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:23.798329 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:23.798336 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:23.798343 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:23.798349 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:23.798606 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:23.799004 1112716 pod_ready.go:92] pod "kube-proxy-9slzp" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:23.799020 1112716 pod_ready.go:81] duration metric: took 399.718359ms waiting for pod "kube-proxy-9slzp" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.799035 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:23.996434 1112716 request.go:629] Waited for 197.325849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629060
	I1002 21:46:23.996521 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629060
	I1002 21:46:23.996530 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:23.996539 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:23.996546 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.019875 1112716 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I1002 21:46:24.019900 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.019910 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:23 GMT
	I1002 21:46:24.019917 1112716 round_trippers.go:580]     Audit-Id: 4f2136ed-a046-413e-9a88-95dc41df5d58
	I1002 21:46:24.019923 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.019930 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.019936 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.019943 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.020066 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629060","namespace":"kube-system","uid":"7f387fbf-48ab-4405-bfc8-4141f1f993e4","resourceVersion":"294","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"41692589b49939f3e56032494ee733e3","kubernetes.io/config.mirror":"41692589b49939f3e56032494ee733e3","kubernetes.io/config.seen":"2023-10-02T21:46:05.978611372Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1002 21:46:24.195673 1112716 request.go:629] Waited for 175.18446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:24.195732 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:46:24.195738 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.195747 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.195758 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.198180 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:24.198205 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.198214 1112716 round_trippers.go:580]     Audit-Id: 401c646b-ed8e-42d2-89ed-01934b7881c0
	I1002 21:46:24.198221 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.198227 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.198234 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.198240 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.198246 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.198407 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:46:24.198809 1112716 pod_ready.go:92] pod "kube-scheduler-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:46:24.198825 1112716 pod_ready.go:81] duration metric: took 399.775992ms waiting for pod "kube-scheduler-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:46:24.198839 1112716 pod_ready.go:38] duration metric: took 2.400055875s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:46:24.198859 1112716 api_server.go:52] waiting for apiserver process to appear ...
	I1002 21:46:24.198921 1112716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:46:24.211490 1112716 command_runner.go:130] > 1270
	I1002 21:46:24.211524 1112716 api_server.go:72] duration metric: took 5.467085292s to wait for apiserver process to appear ...
	I1002 21:46:24.211535 1112716 api_server.go:88] waiting for apiserver healthz status ...
	I1002 21:46:24.211552 1112716 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 21:46:24.221722 1112716 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 21:46:24.221792 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1002 21:46:24.221802 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.221812 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.221819 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.222955 1112716 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1002 21:46:24.223001 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.223022 1112716 round_trippers.go:580]     Content-Length: 263
	I1002 21:46:24.223044 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.223082 1112716 round_trippers.go:580]     Audit-Id: ca984e79-2ec9-4a6c-a1c1-867e6b4365ea
	I1002 21:46:24.223106 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.223128 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.223149 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.223180 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.224173 1112716 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1002 21:46:24.224284 1112716 api_server.go:141] control plane version: v1.28.2
	I1002 21:46:24.224308 1112716 api_server.go:131] duration metric: took 12.766504ms to wait for apiserver health ...
	I1002 21:46:24.224316 1112716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 21:46:24.395652 1112716 request.go:629] Waited for 171.261899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:46:24.395740 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:46:24.395753 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.395763 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.395777 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.399582 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:24.399657 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.399680 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.399702 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.399741 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.399755 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.399764 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.399770 1112716 round_trippers.go:580]     Audit-Id: 143d80e7-fa54-4f89-a660-f33dba3befc0
	I1002 21:46:24.400233 1112716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"408","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1002 21:46:24.402622 1112716 system_pods.go:59] 8 kube-system pods found
	I1002 21:46:24.402663 1112716 system_pods.go:61] "coredns-5dd5756b68-5vhnn" [a90c4a73-8d8d-4bec-832b-c009f3c3bcbb] Running
	I1002 21:46:24.402670 1112716 system_pods.go:61] "etcd-multinode-629060" [6bb8beb8-c1c5-4b2c-9a6e-1b00db71d13a] Running
	I1002 21:46:24.402676 1112716 system_pods.go:61] "kindnet-v68mp" [c073b51c-b148-4045-af7a-2af9e00ab1cf] Running
	I1002 21:46:24.402681 1112716 system_pods.go:61] "kube-apiserver-multinode-629060" [6a9fbd26-ddd8-4dcc-9a48-217bfab74392] Running
	I1002 21:46:24.402687 1112716 system_pods.go:61] "kube-controller-manager-multinode-629060" [7477711d-6adc-4851-994e-3d41d599f050] Running
	I1002 21:46:24.402691 1112716 system_pods.go:61] "kube-proxy-9slzp" [053392fd-91ec-4cc0-98c3-d35660bbe40b] Running
	I1002 21:46:24.402701 1112716 system_pods.go:61] "kube-scheduler-multinode-629060" [7f387fbf-48ab-4405-bfc8-4141f1f993e4] Running
	I1002 21:46:24.402706 1112716 system_pods.go:61] "storage-provisioner" [9880c22d-cac3-49f1-b888-048e6bb56999] Running
	I1002 21:46:24.402713 1112716 system_pods.go:74] duration metric: took 178.385373ms to wait for pod list to return data ...
	I1002 21:46:24.402733 1112716 default_sa.go:34] waiting for default service account to be created ...
	I1002 21:46:24.596175 1112716 request.go:629] Waited for 193.345598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 21:46:24.596238 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1002 21:46:24.596244 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.596254 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.596262 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.598842 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:24.598881 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.598891 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.598898 1112716 round_trippers.go:580]     Audit-Id: dd910d34-17fa-4909-be6d-b163b6593490
	I1002 21:46:24.598904 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.598911 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.598917 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.598923 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.598929 1112716 round_trippers.go:580]     Content-Length: 261
	I1002 21:46:24.598950 1112716 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2d854e8c-ce17-47b0-bbdc-c59b2d418048","resourceVersion":"314","creationTimestamp":"2023-10-02T21:46:18Z"}}]}
	I1002 21:46:24.599153 1112716 default_sa.go:45] found service account: "default"
	I1002 21:46:24.599171 1112716 default_sa.go:55] duration metric: took 196.431821ms for default service account to be created ...
	I1002 21:46:24.599181 1112716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 21:46:24.796594 1112716 request.go:629] Waited for 197.350357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:46:24.796674 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:46:24.796686 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.796695 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.796707 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.800498 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:24.800534 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.800544 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.800551 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.800557 1112716 round_trippers.go:580]     Audit-Id: e6438026-db88-4d8f-a0ae-b0c3f38da264
	I1002 21:46:24.800564 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.800570 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.800577 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.801550 1112716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"408","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1002 21:46:24.804147 1112716 system_pods.go:86] 8 kube-system pods found
	I1002 21:46:24.804179 1112716 system_pods.go:89] "coredns-5dd5756b68-5vhnn" [a90c4a73-8d8d-4bec-832b-c009f3c3bcbb] Running
	I1002 21:46:24.804186 1112716 system_pods.go:89] "etcd-multinode-629060" [6bb8beb8-c1c5-4b2c-9a6e-1b00db71d13a] Running
	I1002 21:46:24.804192 1112716 system_pods.go:89] "kindnet-v68mp" [c073b51c-b148-4045-af7a-2af9e00ab1cf] Running
	I1002 21:46:24.804197 1112716 system_pods.go:89] "kube-apiserver-multinode-629060" [6a9fbd26-ddd8-4dcc-9a48-217bfab74392] Running
	I1002 21:46:24.804203 1112716 system_pods.go:89] "kube-controller-manager-multinode-629060" [7477711d-6adc-4851-994e-3d41d599f050] Running
	I1002 21:46:24.804210 1112716 system_pods.go:89] "kube-proxy-9slzp" [053392fd-91ec-4cc0-98c3-d35660bbe40b] Running
	I1002 21:46:24.804215 1112716 system_pods.go:89] "kube-scheduler-multinode-629060" [7f387fbf-48ab-4405-bfc8-4141f1f993e4] Running
	I1002 21:46:24.804220 1112716 system_pods.go:89] "storage-provisioner" [9880c22d-cac3-49f1-b888-048e6bb56999] Running
	I1002 21:46:24.804233 1112716 system_pods.go:126] duration metric: took 205.046957ms to wait for k8s-apps to be running ...
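For reference, the "k8s-apps running" check logged above can be reproduced by hand against the same cluster. A minimal sketch, assuming the kubeconfig context carries the profile name (the context name is not shown in the log):

# Count kube-system pods that are actually in the Running phase (the log expects 8)
kubectl --context multinode-629060 -n kube-system get pods \
  --field-selector=status.phase=Running -o name | wc -l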
	I1002 21:46:24.804243 1112716 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:46:24.804310 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:46:24.822017 1112716 system_svc.go:56] duration metric: took 17.763693ms WaitForService to wait for kubelet.
	I1002 21:46:24.822045 1112716 kubeadm.go:581] duration metric: took 6.077606171s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 21:46:24.822065 1112716 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:46:24.996326 1112716 request.go:629] Waited for 174.187992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 21:46:24.996385 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 21:46:24.996391 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:24.996400 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:24.996412 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:24.999381 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:24.999458 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:24.999491 1112716 round_trippers.go:580]     Audit-Id: 00bdeffd-8710-4ef3-8dc9-93e2e0824f34
	I1002 21:46:24.999512 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:24.999548 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:24.999574 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:24.999588 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:24.999595 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:24 GMT
	I1002 21:46:24.999729 1112716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1002 21:46:25.000246 1112716 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:46:25.000277 1112716 node_conditions.go:123] node cpu capacity is 2
	I1002 21:46:25.000291 1112716 node_conditions.go:105] duration metric: took 178.220426ms to run NodePressure ...
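The NodePressure verification above reads the node's conditions and capacity from the API. A hedged manual equivalent, again assuming the context name matches the profile:

# Print each NodeCondition type and status (MemoryPressure, DiskPressure, PIDPressure, Ready, ...)
kubectl --context multinode-629060 get node multinode-629060 \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Capacity fields referenced in the log (ephemeral storage and CPU)
kubectl --context multinode-629060 describe node multinode-629060 | grep -E 'ephemeral-storage|cpu:'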
	I1002 21:46:25.000303 1112716 start.go:228] waiting for startup goroutines ...
	I1002 21:46:25.000309 1112716 start.go:233] waiting for cluster config update ...
	I1002 21:46:25.000320 1112716 start.go:242] writing updated cluster config ...
	I1002 21:46:25.005550 1112716 out.go:177] 
	I1002 21:46:25.008258 1112716 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:46:25.008383 1112716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json ...
	I1002 21:46:25.011030 1112716 out.go:177] * Starting worker node multinode-629060-m02 in cluster multinode-629060
	I1002 21:46:25.012935 1112716 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:46:25.014987 1112716 out.go:177] * Pulling base image ...
	I1002 21:46:25.017649 1112716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:46:25.017690 1112716 cache.go:57] Caching tarball of preloaded images
	I1002 21:46:25.017696 1112716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:46:25.017792 1112716 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:46:25.017804 1112716 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 21:46:25.017913 1112716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json ...
	I1002 21:46:25.046446 1112716 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 21:46:25.046486 1112716 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 21:46:25.046536 1112716 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:46:25.046575 1112716 start.go:365] acquiring machines lock for multinode-629060-m02: {Name:mk69d59ee1c5dd66094582552b7358da686e545d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:46:25.046704 1112716 start.go:369] acquired machines lock for "multinode-629060-m02" in 102.67µs
	I1002 21:46:25.046737 1112716 start.go:93] Provisioning new machine with config: &{Name:multinode-629060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 21:46:25.046817 1112716 start.go:125] createHost starting for "m02" (driver="docker")
	I1002 21:46:25.050573 1112716 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 21:46:25.050705 1112716 start.go:159] libmachine.API.Create for "multinode-629060" (driver="docker")
	I1002 21:46:25.050740 1112716 client.go:168] LocalClient.Create starting
	I1002 21:46:25.050818 1112716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem
	I1002 21:46:25.050857 1112716 main.go:141] libmachine: Decoding PEM data...
	I1002 21:46:25.050876 1112716 main.go:141] libmachine: Parsing certificate...
	I1002 21:46:25.050929 1112716 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem
	I1002 21:46:25.050949 1112716 main.go:141] libmachine: Decoding PEM data...
	I1002 21:46:25.050959 1112716 main.go:141] libmachine: Parsing certificate...
	I1002 21:46:25.051210 1112716 cli_runner.go:164] Run: docker network inspect multinode-629060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:46:25.069282 1112716 network_create.go:77] Found existing network {name:multinode-629060 subnet:0x4000c03980 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1002 21:46:25.069343 1112716 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-629060-m02" container
	I1002 21:46:25.069423 1112716 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:46:25.091846 1112716 cli_runner.go:164] Run: docker volume create multinode-629060-m02 --label name.minikube.sigs.k8s.io=multinode-629060-m02 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:46:25.111616 1112716 oci.go:103] Successfully created a docker volume multinode-629060-m02
	I1002 21:46:25.111709 1112716 cli_runner.go:164] Run: docker run --rm --name multinode-629060-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-629060-m02 --entrypoint /usr/bin/test -v multinode-629060-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I1002 21:46:25.706865 1112716 oci.go:107] Successfully prepared a docker volume multinode-629060-m02
	I1002 21:46:25.706910 1112716 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:46:25.706931 1112716 kic.go:190] Starting extracting preloaded images to volume ...
	I1002 21:46:25.707026 1112716 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-629060-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:46:30.054894 1112716 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-629060-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.347821448s)
	I1002 21:46:30.054939 1112716 kic.go:199] duration metric: took 4.347999 seconds to extract preloaded images to volume
	W1002 21:46:30.055090 1112716 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 21:46:30.055283 1112716 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:46:30.130156 1112716 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-629060-m02 --name multinode-629060-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-629060-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-629060-m02 --network multinode-629060 --ip 192.168.58.3 --volume multinode-629060-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I1002 21:46:30.514442 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Running}}
	I1002 21:46:30.537403 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Status}}
	I1002 21:46:30.568227 1112716 cli_runner.go:164] Run: docker exec multinode-629060-m02 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:46:30.666914 1112716 oci.go:144] the created container "multinode-629060-m02" has a running status.
	I1002 21:46:30.666942 1112716 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa...
	I1002 21:46:31.279729 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:46:31.279772 1112716 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:46:31.324674 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Status}}
	I1002 21:46:31.351680 1112716 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:46:31.351713 1112716 kic_runner.go:114] Args: [docker exec --privileged multinode-629060-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:46:31.457423 1112716 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Status}}
	I1002 21:46:31.491006 1112716 machine.go:88] provisioning docker machine ...
	I1002 21:46:31.491036 1112716 ubuntu.go:169] provisioning hostname "multinode-629060-m02"
	I1002 21:46:31.491109 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:31.529612 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:46:31.530033 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1002 21:46:31.530045 1112716 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-629060-m02 && echo "multinode-629060-m02" | sudo tee /etc/hostname
	I1002 21:46:31.707577 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-629060-m02
	
	I1002 21:46:31.707730 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:31.736662 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:46:31.737056 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1002 21:46:31.737074 1112716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-629060-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-629060-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-629060-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:46:31.885980 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:46:31.886010 1112716 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 21:46:31.886027 1112716 ubuntu.go:177] setting up certificates
	I1002 21:46:31.886036 1112716 provision.go:83] configureAuth start
	I1002 21:46:31.886107 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060-m02
	I1002 21:46:31.920533 1112716 provision.go:138] copyHostCerts
	I1002 21:46:31.920576 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:46:31.920609 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 21:46:31.920616 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:46:31.920693 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 21:46:31.920796 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:46:31.920813 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 21:46:31.920816 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:46:31.920843 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 21:46:31.920881 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:46:31.920896 1112716 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 21:46:31.920900 1112716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:46:31.920922 1112716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 21:46:31.920962 1112716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.multinode-629060-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-629060-m02]
	I1002 21:46:32.199436 1112716 provision.go:172] copyRemoteCerts
	I1002 21:46:32.199509 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:46:32.199554 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:32.217811 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:46:32.320875 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:46:32.320938 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:46:32.371448 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:46:32.371515 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1002 21:46:32.404186 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:46:32.404297 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:46:32.435792 1112716 provision.go:86] duration metric: configureAuth took 549.73823ms
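configureAuth above generates a machine-specific server certificate signed by the shared minikube CA and copies it to /etc/docker on the new node. minikube does this in Go; the following is only a rough openssl equivalent under that assumption, using the org= and san=[...] values reported in the log:

# Key and CSR for the new machine (organization mirrors the log's org= field)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.multinode-629060-m02"

# Sign with the shared CA, embedding the same subject-alternative names
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -days 825 -out server.pem \
  -extfile <(printf "subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-629060-m02")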
	I1002 21:46:32.435865 1112716 ubuntu.go:193] setting minikube options for container-runtime
	I1002 21:46:32.436089 1112716 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:46:32.436225 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:32.454995 1112716 main.go:141] libmachine: Using SSH client type: native
	I1002 21:46:32.455423 1112716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33815 <nil> <nil>}
	I1002 21:46:32.455444 1112716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:46:32.737908 1112716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:46:32.737931 1112716 machine.go:91] provisioned docker machine in 1.246904365s
	I1002 21:46:32.737950 1112716 client.go:171] LocalClient.Create took 7.687200622s
	I1002 21:46:32.737967 1112716 start.go:167] duration metric: libmachine.API.Create for "multinode-629060" took 7.68726431s
	I1002 21:46:32.737989 1112716 start.go:300] post-start starting for "multinode-629060-m02" (driver="docker")
	I1002 21:46:32.738002 1112716 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:46:32.738157 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:46:32.738230 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:32.759856 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:46:32.860786 1112716 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:46:32.865073 1112716 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1002 21:46:32.865096 1112716 command_runner.go:130] > NAME="Ubuntu"
	I1002 21:46:32.865104 1112716 command_runner.go:130] > VERSION_ID="22.04"
	I1002 21:46:32.865111 1112716 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1002 21:46:32.865117 1112716 command_runner.go:130] > VERSION_CODENAME=jammy
	I1002 21:46:32.865122 1112716 command_runner.go:130] > ID=ubuntu
	I1002 21:46:32.865126 1112716 command_runner.go:130] > ID_LIKE=debian
	I1002 21:46:32.865132 1112716 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1002 21:46:32.865139 1112716 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1002 21:46:32.865148 1112716 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1002 21:46:32.865160 1112716 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1002 21:46:32.865166 1112716 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1002 21:46:32.865269 1112716 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:46:32.865301 1112716 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 21:46:32.865315 1112716 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 21:46:32.865323 1112716 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 21:46:32.865336 1112716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 21:46:32.865398 1112716 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 21:46:32.865482 1112716 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 21:46:32.865493 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /etc/ssl/certs/10477322.pem
	I1002 21:46:32.865596 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:46:32.876501 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:46:32.908712 1112716 start.go:303] post-start completed in 170.703297ms
	I1002 21:46:32.909152 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060-m02
	I1002 21:46:32.931000 1112716 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/config.json ...
	I1002 21:46:32.931298 1112716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:46:32.931347 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:32.949609 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:46:33.044229 1112716 command_runner.go:130] > 11%!
	(MISSING)I1002 21:46:33.044328 1112716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:46:33.050411 1112716 command_runner.go:130] > 173G
	I1002 21:46:33.051019 1112716 start.go:128] duration metric: createHost completed in 8.004186178s
	I1002 21:46:33.051085 1112716 start.go:83] releasing machines lock for "multinode-629060-m02", held for 8.004363269s
	I1002 21:46:33.051205 1112716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060-m02
	I1002 21:46:33.073158 1112716 out.go:177] * Found network options:
	I1002 21:46:33.075307 1112716 out.go:177]   - NO_PROXY=192.168.58.2
	W1002 21:46:33.077528 1112716 proxy.go:119] fail to check proxy env: Error ip not in block
	W1002 21:46:33.077579 1112716 proxy.go:119] fail to check proxy env: Error ip not in block
	I1002 21:46:33.077655 1112716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:46:33.077768 1112716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:46:33.077834 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:33.078032 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:46:33.105309 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:46:33.106299 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:46:33.359005 1112716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 21:46:33.359087 1112716 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 21:46:33.364706 1112716 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1002 21:46:33.364732 1112716 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1002 21:46:33.364740 1112716 command_runner.go:130] > Device: b3h/179d	Inode: 1568809     Links: 1
	I1002 21:46:33.364748 1112716 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:46:33.364755 1112716 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1002 21:46:33.364761 1112716 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1002 21:46:33.364768 1112716 command_runner.go:130] > Change: 2023-10-02 21:23:08.253134165 +0000
	I1002 21:46:33.364776 1112716 command_runner.go:130] >  Birth: 2023-10-02 21:23:08.253134165 +0000
	I1002 21:46:33.365186 1112716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:46:33.390066 1112716 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 21:46:33.390162 1112716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:46:33.431613 1112716 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1002 21:46:33.431645 1112716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
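The two find/mv passes above park any loopback, bridge, or podman CNI configs so that the cluster's own CNI (kindnet here) owns pod networking. A plain-shell restatement of the same idea; the .mk_disabled suffix is the one minikube itself uses:

# Park conflicting CNI configs instead of deleting them
cd /etc/cni/net.d
for f in *loopback.conf* *bridge* *podman*; do
  [ -e "$f" ] || continue                          # skip unmatched globs
  case "$f" in *.mk_disabled) continue ;; esac     # already disabled
  sudo mv "$f" "$f.mk_disabled"
done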
	I1002 21:46:33.431654 1112716 start.go:469] detecting cgroup driver to use...
	I1002 21:46:33.431684 1112716 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 21:46:33.431738 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:46:33.451827 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:46:33.466727 1112716 docker.go:197] disabling cri-docker service (if available) ...
	I1002 21:46:33.466795 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:46:33.483736 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:46:33.502661 1112716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:46:33.618778 1112716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:46:33.734864 1112716 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1002 21:46:33.734933 1112716 docker.go:213] disabling docker service ...
	I1002 21:46:33.735019 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:46:33.758467 1112716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:46:33.773035 1112716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:46:33.871243 1112716 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1002 21:46:33.871315 1112716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:46:33.978218 1112716 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1002 21:46:33.978290 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:46:33.993047 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:46:34.017479 1112716 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 21:46:34.018823 1112716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:46:34.018889 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:46:34.031023 1112716 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:46:34.031095 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:46:34.043995 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:46:34.056074 1112716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:46:34.070295 1112716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:46:34.083693 1112716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:46:34.093008 1112716 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 21:46:34.094160 1112716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:46:34.104939 1112716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:46:34.195502 1112716 ssh_runner.go:195] Run: sudo systemctl restart crio
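The sed edits above change the pause image and cgroup handling in CRI-O's drop-in before the restart. The log only shows the individual keys; assuming they live in the standard [crio.image]/[crio.runtime] sections, the end state looks roughly like the minimal drop-in below (written from scratch here, whereas minikube edits the existing file in place):

sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
EOF
sudo systemctl daemon-reload && sudo systemctl restart crio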
	I1002 21:46:34.329333 1112716 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:46:34.329407 1112716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:46:34.334366 1112716 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 21:46:34.334391 1112716 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 21:46:34.334400 1112716 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1002 21:46:34.334408 1112716 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:46:34.334414 1112716 command_runner.go:130] > Access: 2023-10-02 21:46:34.315031419 +0000
	I1002 21:46:34.334421 1112716 command_runner.go:130] > Modify: 2023-10-02 21:46:34.315031419 +0000
	I1002 21:46:34.334427 1112716 command_runner.go:130] > Change: 2023-10-02 21:46:34.315031419 +0000
	I1002 21:46:34.334432 1112716 command_runner.go:130] >  Birth: -
	I1002 21:46:34.334667 1112716 start.go:537] Will wait 60s for crictl version
	I1002 21:46:34.334727 1112716 ssh_runner.go:195] Run: which crictl
	I1002 21:46:34.338697 1112716 command_runner.go:130] > /usr/bin/crictl
	I1002 21:46:34.339124 1112716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:46:34.384470 1112716 command_runner.go:130] > Version:  0.1.0
	I1002 21:46:34.384715 1112716 command_runner.go:130] > RuntimeName:  cri-o
	I1002 21:46:34.384909 1112716 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1002 21:46:34.385088 1112716 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 21:46:34.387913 1112716 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 21:46:34.387995 1112716 ssh_runner.go:195] Run: crio --version
	I1002 21:46:34.432362 1112716 command_runner.go:130] > crio version 1.24.6
	I1002 21:46:34.432384 1112716 command_runner.go:130] > Version:          1.24.6
	I1002 21:46:34.432395 1112716 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 21:46:34.432400 1112716 command_runner.go:130] > GitTreeState:     clean
	I1002 21:46:34.432409 1112716 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 21:46:34.432416 1112716 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 21:46:34.432421 1112716 command_runner.go:130] > Compiler:         gc
	I1002 21:46:34.432427 1112716 command_runner.go:130] > Platform:         linux/arm64
	I1002 21:46:34.432437 1112716 command_runner.go:130] > Linkmode:         dynamic
	I1002 21:46:34.432447 1112716 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 21:46:34.432456 1112716 command_runner.go:130] > SeccompEnabled:   true
	I1002 21:46:34.432461 1112716 command_runner.go:130] > AppArmorEnabled:  false
	I1002 21:46:34.434727 1112716 ssh_runner.go:195] Run: crio --version
	I1002 21:46:34.483242 1112716 command_runner.go:130] > crio version 1.24.6
	I1002 21:46:34.483264 1112716 command_runner.go:130] > Version:          1.24.6
	I1002 21:46:34.483273 1112716 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1002 21:46:34.483279 1112716 command_runner.go:130] > GitTreeState:     clean
	I1002 21:46:34.483286 1112716 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1002 21:46:34.483300 1112716 command_runner.go:130] > GoVersion:        go1.18.2
	I1002 21:46:34.483305 1112716 command_runner.go:130] > Compiler:         gc
	I1002 21:46:34.483314 1112716 command_runner.go:130] > Platform:         linux/arm64
	I1002 21:46:34.483320 1112716 command_runner.go:130] > Linkmode:         dynamic
	I1002 21:46:34.483332 1112716 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1002 21:46:34.483338 1112716 command_runner.go:130] > SeccompEnabled:   true
	I1002 21:46:34.483346 1112716 command_runner.go:130] > AppArmorEnabled:  false
	I1002 21:46:34.488306 1112716 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 21:46:34.490457 1112716 out.go:177]   - env NO_PROXY=192.168.58.2
	I1002 21:46:34.492642 1112716 cli_runner.go:164] Run: docker network inspect multinode-629060 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:46:34.521418 1112716 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 21:46:34.527370 1112716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:46:34.541419 1112716 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060 for IP: 192.168.58.3
	I1002 21:46:34.541448 1112716 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:46:34.541606 1112716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 21:46:34.541651 1112716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 21:46:34.541662 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:46:34.541690 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:46:34.541702 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:46:34.541713 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:46:34.541768 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 21:46:34.541801 1112716 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 21:46:34.541809 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:46:34.541834 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:46:34.541859 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:46:34.541882 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 21:46:34.541927 1112716 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:46:34.541956 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem -> /usr/share/ca-certificates/1047732.pem
	I1002 21:46:34.541968 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> /usr/share/ca-certificates/10477322.pem
	I1002 21:46:34.541979 1112716 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:46:34.542313 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:46:34.571468 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:46:34.600692 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:46:34.629491 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:46:34.658755 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 21:46:34.688253 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 21:46:34.717581 1112716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:46:34.747414 1112716 ssh_runner.go:195] Run: openssl version
	I1002 21:46:34.754208 1112716 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1002 21:46:34.754618 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 21:46:34.766554 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 21:46:34.771266 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:46:34.771358 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:46:34.771452 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 21:46:34.780065 1112716 command_runner.go:130] > 51391683
	I1002 21:46:34.780485 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
	I1002 21:46:34.792632 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 21:46:34.804764 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 21:46:34.809594 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:46:34.809628 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:46:34.809679 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 21:46:34.817901 1112716 command_runner.go:130] > 3ec20f2e
	I1002 21:46:34.818332 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:46:34.830691 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:46:34.842683 1112716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:46:34.847768 1112716 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:46:34.848096 1112716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:46:34.848162 1112716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:46:34.857056 1112716 command_runner.go:130] > b5213941
	I1002 21:46:34.857536 1112716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
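Each ln -fs above creates the subject-hash symlink that OpenSSL uses to locate trusted CAs in /etc/ssl/certs; the hash values (51391683, 3ec20f2e, b5213941) come from the openssl x509 -hash calls immediately before them. The same pattern for an arbitrary certificate, with the filename as a placeholder:

cert=/usr/share/ca-certificates/minikubeCA.pem     # any CA cert already copied to the node
hash=$(openssl x509 -hash -noout -in "$cert")      # e.g. b5213941
sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # .0 suffix; bump the digit if the hash collides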
	I1002 21:46:34.869385 1112716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 21:46:34.873915 1112716 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:46:34.873957 1112716 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1002 21:46:34.874063 1112716 ssh_runner.go:195] Run: crio config
	I1002 21:46:34.940640 1112716 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 21:46:34.940675 1112716 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 21:46:34.940685 1112716 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 21:46:34.940689 1112716 command_runner.go:130] > #
	I1002 21:46:34.940698 1112716 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 21:46:34.940709 1112716 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 21:46:34.940720 1112716 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 21:46:34.940750 1112716 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 21:46:34.940764 1112716 command_runner.go:130] > # reload'.
	I1002 21:46:34.940773 1112716 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 21:46:34.940787 1112716 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 21:46:34.940811 1112716 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 21:46:34.940823 1112716 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 21:46:34.940827 1112716 command_runner.go:130] > [crio]
	I1002 21:46:34.940835 1112716 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 21:46:34.940847 1112716 command_runner.go:130] > # containers images, in this directory.
	I1002 21:46:34.940857 1112716 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 21:46:34.940868 1112716 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 21:46:34.940875 1112716 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1002 21:46:34.940882 1112716 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 21:46:34.940892 1112716 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 21:46:34.941200 1112716 command_runner.go:130] > # storage_driver = "vfs"
	I1002 21:46:34.941238 1112716 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 21:46:34.941250 1112716 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 21:46:34.941275 1112716 command_runner.go:130] > # storage_option = [
	I1002 21:46:34.941628 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.941648 1112716 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 21:46:34.941659 1112716 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 21:46:34.941672 1112716 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 21:46:34.941683 1112716 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 21:46:34.941691 1112716 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 21:46:34.941700 1112716 command_runner.go:130] > # always happen on a node reboot
	I1002 21:46:34.941707 1112716 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 21:46:34.941726 1112716 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 21:46:34.941744 1112716 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 21:46:34.941760 1112716 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 21:46:34.941773 1112716 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1002 21:46:34.941784 1112716 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 21:46:34.941807 1112716 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 21:46:34.941818 1112716 command_runner.go:130] > # internal_wipe = true
	I1002 21:46:34.941832 1112716 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 21:46:34.941845 1112716 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 21:46:34.941859 1112716 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 21:46:34.941870 1112716 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 21:46:34.941878 1112716 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 21:46:34.941886 1112716 command_runner.go:130] > [crio.api]
	I1002 21:46:34.941896 1112716 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 21:46:34.941908 1112716 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 21:46:34.941916 1112716 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 21:46:34.941921 1112716 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 21:46:34.941930 1112716 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 21:46:34.941941 1112716 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 21:46:34.941947 1112716 command_runner.go:130] > # stream_port = "0"
	I1002 21:46:34.941964 1112716 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 21:46:34.941973 1112716 command_runner.go:130] > # stream_enable_tls = false
	I1002 21:46:34.941982 1112716 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 21:46:34.941994 1112716 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 21:46:34.942003 1112716 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 21:46:34.942017 1112716 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1002 21:46:34.942021 1112716 command_runner.go:130] > # minutes.
	I1002 21:46:34.942029 1112716 command_runner.go:130] > # stream_tls_cert = ""
	I1002 21:46:34.942053 1112716 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 21:46:34.942075 1112716 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1002 21:46:34.942085 1112716 command_runner.go:130] > # stream_tls_key = ""
	I1002 21:46:34.942105 1112716 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 21:46:34.942119 1112716 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 21:46:34.942135 1112716 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1002 21:46:34.942145 1112716 command_runner.go:130] > # stream_tls_ca = ""
	I1002 21:46:34.942157 1112716 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 21:46:34.942167 1112716 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 21:46:34.942179 1112716 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1002 21:46:34.942191 1112716 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 21:46:34.942233 1112716 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 21:46:34.942249 1112716 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 21:46:34.942258 1112716 command_runner.go:130] > [crio.runtime]
	I1002 21:46:34.942274 1112716 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 21:46:34.942287 1112716 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 21:46:34.942297 1112716 command_runner.go:130] > # "nofile=1024:2048"
	I1002 21:46:34.942312 1112716 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 21:46:34.942322 1112716 command_runner.go:130] > # default_ulimits = [
	I1002 21:46:34.942330 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.942346 1112716 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 21:46:34.942351 1112716 command_runner.go:130] > # no_pivot = false
	I1002 21:46:34.942363 1112716 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 21:46:34.942372 1112716 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 21:46:34.942384 1112716 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 21:46:34.942398 1112716 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 21:46:34.942409 1112716 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 21:46:34.942423 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 21:46:34.942431 1112716 command_runner.go:130] > # conmon = ""
	I1002 21:46:34.942441 1112716 command_runner.go:130] > # Cgroup setting for conmon
	I1002 21:46:34.942452 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 21:46:34.942461 1112716 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 21:46:34.942473 1112716 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 21:46:34.942490 1112716 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 21:46:34.942507 1112716 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 21:46:34.942526 1112716 command_runner.go:130] > # conmon_env = [
	I1002 21:46:34.942531 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.942544 1112716 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 21:46:34.942554 1112716 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 21:46:34.942561 1112716 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 21:46:34.942566 1112716 command_runner.go:130] > # default_env = [
	I1002 21:46:34.942571 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.942580 1112716 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 21:46:34.942588 1112716 command_runner.go:130] > # selinux = false
	I1002 21:46:34.942599 1112716 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 21:46:34.942607 1112716 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1002 21:46:34.942620 1112716 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1002 21:46:34.942626 1112716 command_runner.go:130] > # seccomp_profile = ""
	I1002 21:46:34.942639 1112716 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1002 21:46:34.942650 1112716 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1002 21:46:34.942661 1112716 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1002 21:46:34.942668 1112716 command_runner.go:130] > # which might increase security.
	I1002 21:46:34.942677 1112716 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1002 21:46:34.942690 1112716 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 21:46:34.942698 1112716 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 21:46:34.942714 1112716 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 21:46:34.942729 1112716 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 21:46:34.942737 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:46:34.942748 1112716 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 21:46:34.942756 1112716 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 21:46:34.942762 1112716 command_runner.go:130] > # the cgroup blockio controller.
	I1002 21:46:34.942769 1112716 command_runner.go:130] > # blockio_config_file = ""
	I1002 21:46:34.942786 1112716 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 21:46:34.942795 1112716 command_runner.go:130] > # irqbalance daemon.
	I1002 21:46:34.942809 1112716 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 21:46:34.942821 1112716 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 21:46:34.942828 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:46:34.942836 1112716 command_runner.go:130] > # rdt_config_file = ""
	I1002 21:46:34.942844 1112716 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 21:46:34.942849 1112716 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1002 21:46:34.942857 1112716 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 21:46:34.943199 1112716 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 21:46:34.943219 1112716 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 21:46:34.943228 1112716 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 21:46:34.943243 1112716 command_runner.go:130] > # will be added.
	I1002 21:46:34.943252 1112716 command_runner.go:130] > # default_capabilities = [
	I1002 21:46:34.943257 1112716 command_runner.go:130] > # 	"CHOWN",
	I1002 21:46:34.943263 1112716 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 21:46:34.943272 1112716 command_runner.go:130] > # 	"FSETID",
	I1002 21:46:34.943281 1112716 command_runner.go:130] > # 	"FOWNER",
	I1002 21:46:34.943290 1112716 command_runner.go:130] > # 	"SETGID",
	I1002 21:46:34.943295 1112716 command_runner.go:130] > # 	"SETUID",
	I1002 21:46:34.943300 1112716 command_runner.go:130] > # 	"SETPCAP",
	I1002 21:46:34.943316 1112716 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 21:46:34.943321 1112716 command_runner.go:130] > # 	"KILL",
	I1002 21:46:34.943333 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943346 1112716 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 21:46:34.943359 1112716 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 21:46:34.943366 1112716 command_runner.go:130] > # add_inheritable_capabilities = true
	I1002 21:46:34.943386 1112716 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 21:46:34.943397 1112716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 21:46:34.943403 1112716 command_runner.go:130] > # default_sysctls = [
	I1002 21:46:34.943407 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943413 1112716 command_runner.go:130] > # List of devices on the host that a
	I1002 21:46:34.943427 1112716 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 21:46:34.943433 1112716 command_runner.go:130] > # allowed_devices = [
	I1002 21:46:34.943439 1112716 command_runner.go:130] > # 	"/dev/fuse",
	I1002 21:46:34.943452 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943476 1112716 command_runner.go:130] > # List of additional devices, specified as
	I1002 21:46:34.943515 1112716 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 21:46:34.943526 1112716 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 21:46:34.943539 1112716 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 21:46:34.943545 1112716 command_runner.go:130] > # additional_devices = [
	I1002 21:46:34.943556 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943563 1112716 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 21:46:34.943568 1112716 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 21:46:34.943583 1112716 command_runner.go:130] > # 	"/etc/cdi",
	I1002 21:46:34.943588 1112716 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 21:46:34.943592 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943600 1112716 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 21:46:34.943610 1112716 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 21:46:34.943622 1112716 command_runner.go:130] > # Defaults to false.
	I1002 21:46:34.943628 1112716 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 21:46:34.943636 1112716 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 21:46:34.943647 1112716 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 21:46:34.943652 1112716 command_runner.go:130] > # hooks_dir = [
	I1002 21:46:34.943662 1112716 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 21:46:34.943670 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.943678 1112716 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 21:46:34.943686 1112716 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 21:46:34.943693 1112716 command_runner.go:130] > # its default mounts from the following two files:
	I1002 21:46:34.943699 1112716 command_runner.go:130] > #
	I1002 21:46:34.943707 1112716 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 21:46:34.943718 1112716 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 21:46:34.943725 1112716 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 21:46:34.943737 1112716 command_runner.go:130] > #
	I1002 21:46:34.943746 1112716 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 21:46:34.943757 1112716 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 21:46:34.943766 1112716 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 21:46:34.943772 1112716 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 21:46:34.943776 1112716 command_runner.go:130] > #
	I1002 21:46:34.943781 1112716 command_runner.go:130] > # default_mounts_file = ""
	I1002 21:46:34.943796 1112716 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 21:46:34.943810 1112716 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 21:46:34.943815 1112716 command_runner.go:130] > # pids_limit = 0
	I1002 21:46:34.943833 1112716 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 21:46:34.943846 1112716 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 21:46:34.943855 1112716 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 21:46:34.943865 1112716 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 21:46:34.943870 1112716 command_runner.go:130] > # log_size_max = -1
	I1002 21:46:34.943879 1112716 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 21:46:34.943891 1112716 command_runner.go:130] > # log_to_journald = false
	I1002 21:46:34.943901 1112716 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 21:46:34.943911 1112716 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 21:46:34.943918 1112716 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 21:46:34.943928 1112716 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 21:46:34.943934 1112716 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 21:46:34.943939 1112716 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 21:46:34.943946 1112716 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 21:46:34.944343 1112716 command_runner.go:130] > # read_only = false
	I1002 21:46:34.944367 1112716 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 21:46:34.944381 1112716 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 21:46:34.944387 1112716 command_runner.go:130] > # live configuration reload.
	I1002 21:46:34.944392 1112716 command_runner.go:130] > # log_level = "info"
	I1002 21:46:34.944399 1112716 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 21:46:34.944408 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:46:34.944414 1112716 command_runner.go:130] > # log_filter = ""
	I1002 21:46:34.944424 1112716 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 21:46:34.944435 1112716 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 21:46:34.944445 1112716 command_runner.go:130] > # separated by comma.
	I1002 21:46:34.944454 1112716 command_runner.go:130] > # uid_mappings = ""
	I1002 21:46:34.944461 1112716 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 21:46:34.944474 1112716 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 21:46:34.944479 1112716 command_runner.go:130] > # separated by comma.
	I1002 21:46:34.944487 1112716 command_runner.go:130] > # gid_mappings = ""
	I1002 21:46:34.944495 1112716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 21:46:34.944503 1112716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 21:46:34.944514 1112716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 21:46:34.944520 1112716 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 21:46:34.944530 1112716 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 21:46:34.944538 1112716 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 21:46:34.944545 1112716 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 21:46:34.944551 1112716 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 21:46:34.944562 1112716 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 21:46:34.944573 1112716 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 21:46:34.944580 1112716 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 21:46:34.944588 1112716 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 21:46:34.944595 1112716 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 21:46:34.944605 1112716 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 21:46:34.944612 1112716 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 21:46:34.944619 1112716 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 21:46:34.944624 1112716 command_runner.go:130] > # drop_infra_ctr = true
	I1002 21:46:34.944637 1112716 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 21:46:34.944652 1112716 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 21:46:34.944661 1112716 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 21:46:34.944669 1112716 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 21:46:34.944676 1112716 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 21:46:34.944699 1112716 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 21:46:34.944704 1112716 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 21:46:34.944717 1112716 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 21:46:34.944722 1112716 command_runner.go:130] > # pinns_path = ""
	I1002 21:46:34.944732 1112716 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 21:46:34.944742 1112716 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1002 21:46:34.944752 1112716 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1002 21:46:34.944762 1112716 command_runner.go:130] > # default_runtime = "runc"
	I1002 21:46:34.944771 1112716 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 21:46:34.944787 1112716 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 21:46:34.944800 1112716 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 21:46:34.944817 1112716 command_runner.go:130] > # creation as a file is not desired either.
	I1002 21:46:34.944828 1112716 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 21:46:34.944837 1112716 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 21:46:34.944847 1112716 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 21:46:34.944854 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.944862 1112716 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 21:46:34.944870 1112716 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 21:46:34.944878 1112716 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1002 21:46:34.944886 1112716 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1002 21:46:34.944893 1112716 command_runner.go:130] > #
	I1002 21:46:34.944899 1112716 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1002 21:46:34.944905 1112716 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1002 21:46:34.944917 1112716 command_runner.go:130] > #  runtime_type = "oci"
	I1002 21:46:34.944924 1112716 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1002 21:46:34.944938 1112716 command_runner.go:130] > #  privileged_without_host_devices = false
	I1002 21:46:34.944944 1112716 command_runner.go:130] > #  allowed_annotations = []
	I1002 21:46:34.944955 1112716 command_runner.go:130] > # Where:
	I1002 21:46:34.944962 1112716 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1002 21:46:34.944979 1112716 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1002 21:46:34.944987 1112716 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 21:46:34.945000 1112716 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 21:46:34.945005 1112716 command_runner.go:130] > #   in $PATH.
	I1002 21:46:34.945016 1112716 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1002 21:46:34.945022 1112716 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 21:46:34.945052 1112716 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1002 21:46:34.945066 1112716 command_runner.go:130] > #   state.
	I1002 21:46:34.945074 1112716 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 21:46:34.945082 1112716 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1002 21:46:34.945094 1112716 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 21:46:34.945105 1112716 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 21:46:34.945113 1112716 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 21:46:34.945121 1112716 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 21:46:34.945133 1112716 command_runner.go:130] > #   The currently recognized values are:
	I1002 21:46:34.945144 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 21:46:34.945160 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 21:46:34.945170 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 21:46:34.945177 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 21:46:34.945187 1112716 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 21:46:34.945225 1112716 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 21:46:34.945234 1112716 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 21:46:34.945244 1112716 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1002 21:46:34.945255 1112716 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 21:46:34.945264 1112716 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 21:46:34.945274 1112716 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1002 21:46:34.945279 1112716 command_runner.go:130] > runtime_type = "oci"
	I1002 21:46:34.945285 1112716 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 21:46:34.945293 1112716 command_runner.go:130] > runtime_config_path = ""
	I1002 21:46:34.945299 1112716 command_runner.go:130] > monitor_path = ""
	I1002 21:46:34.945306 1112716 command_runner.go:130] > monitor_cgroup = ""
	I1002 21:46:34.945312 1112716 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 21:46:34.945419 1112716 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1002 21:46:34.945427 1112716 command_runner.go:130] > # running containers
	I1002 21:46:34.945440 1112716 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1002 21:46:34.945448 1112716 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1002 21:46:34.945462 1112716 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1002 21:46:34.945478 1112716 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1002 21:46:34.945488 1112716 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1002 21:46:34.945494 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1002 21:46:34.945500 1112716 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1002 21:46:34.945505 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1002 21:46:34.945512 1112716 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1002 21:46:34.945520 1112716 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1002 21:46:34.945534 1112716 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 21:46:34.945549 1112716 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 21:46:34.945557 1112716 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 21:46:34.945570 1112716 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1002 21:46:34.945580 1112716 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1002 21:46:34.945587 1112716 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 21:46:34.945600 1112716 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 21:46:34.945614 1112716 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 21:46:34.945622 1112716 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 21:46:34.945636 1112716 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 21:46:34.945646 1112716 command_runner.go:130] > # Example:
	I1002 21:46:34.945656 1112716 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 21:46:34.945662 1112716 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 21:46:34.945671 1112716 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 21:46:34.945678 1112716 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 21:46:34.945685 1112716 command_runner.go:130] > # cpuset = 0
	I1002 21:46:34.945690 1112716 command_runner.go:130] > # cpushares = "0-1"
	I1002 21:46:34.945694 1112716 command_runner.go:130] > # Where:
	I1002 21:46:34.945700 1112716 command_runner.go:130] > # The workload name is workload-type.
	I1002 21:46:34.945712 1112716 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 21:46:34.945722 1112716 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 21:46:34.945736 1112716 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 21:46:34.945746 1112716 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 21:46:34.945754 1112716 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 21:46:34.945759 1112716 command_runner.go:130] > # 
	I1002 21:46:34.945768 1112716 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 21:46:34.945774 1112716 command_runner.go:130] > #
	I1002 21:46:34.945782 1112716 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 21:46:34.945789 1112716 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1002 21:46:34.945806 1112716 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1002 21:46:34.945898 1112716 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1002 21:46:34.945912 1112716 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1002 21:46:34.945917 1112716 command_runner.go:130] > [crio.image]
	I1002 21:46:34.945924 1112716 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 21:46:34.945932 1112716 command_runner.go:130] > # default_transport = "docker://"
	I1002 21:46:34.945940 1112716 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 21:46:34.945950 1112716 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 21:46:34.945958 1112716 command_runner.go:130] > # global_auth_file = ""
	I1002 21:46:34.945965 1112716 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 21:46:34.945972 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:46:34.945985 1112716 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1002 21:46:34.945994 1112716 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 21:46:34.946009 1112716 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 21:46:34.946020 1112716 command_runner.go:130] > # This option supports live configuration reload.
	I1002 21:46:34.946029 1112716 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 21:46:34.946039 1112716 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 21:46:34.946050 1112716 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 21:46:34.946066 1112716 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 21:46:34.946076 1112716 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 21:46:34.946088 1112716 command_runner.go:130] > # pause_command = "/pause"
	I1002 21:46:34.946096 1112716 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 21:46:34.946104 1112716 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 21:46:34.946112 1112716 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 21:46:34.946121 1112716 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 21:46:34.946133 1112716 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 21:46:34.946141 1112716 command_runner.go:130] > # signature_policy = ""
	I1002 21:46:34.946149 1112716 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 21:46:34.946163 1112716 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 21:46:34.946171 1112716 command_runner.go:130] > # changing them here.
	I1002 21:46:34.946179 1112716 command_runner.go:130] > # insecure_registries = [
	I1002 21:46:34.946183 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.946191 1112716 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 21:46:34.946198 1112716 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 21:46:34.946208 1112716 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 21:46:34.946220 1112716 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 21:46:34.946231 1112716 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 21:46:34.946243 1112716 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 21:46:34.946248 1112716 command_runner.go:130] > # CNI plugins.
	I1002 21:46:34.946255 1112716 command_runner.go:130] > [crio.network]
	I1002 21:46:34.946262 1112716 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 21:46:34.946269 1112716 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 21:46:34.946275 1112716 command_runner.go:130] > # cni_default_network = ""
	I1002 21:46:34.946282 1112716 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 21:46:34.946293 1112716 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 21:46:34.946300 1112716 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 21:46:34.946308 1112716 command_runner.go:130] > # plugin_dirs = [
	I1002 21:46:34.946321 1112716 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 21:46:34.946326 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.946334 1112716 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 21:46:34.946342 1112716 command_runner.go:130] > [crio.metrics]
	I1002 21:46:34.946348 1112716 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 21:46:34.946630 1112716 command_runner.go:130] > # enable_metrics = false
	I1002 21:46:34.946645 1112716 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 21:46:34.946651 1112716 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 21:46:34.946660 1112716 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 21:46:34.946668 1112716 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 21:46:34.946771 1112716 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 21:46:34.946783 1112716 command_runner.go:130] > # metrics_collectors = [
	I1002 21:46:34.946788 1112716 command_runner.go:130] > # 	"operations",
	I1002 21:46:34.946794 1112716 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1002 21:46:34.946800 1112716 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1002 21:46:34.946805 1112716 command_runner.go:130] > # 	"operations_errors",
	I1002 21:46:34.946852 1112716 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1002 21:46:34.946862 1112716 command_runner.go:130] > # 	"image_pulls_by_name",
	I1002 21:46:34.946871 1112716 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1002 21:46:34.946876 1112716 command_runner.go:130] > # 	"image_pulls_failures",
	I1002 21:46:34.946881 1112716 command_runner.go:130] > # 	"image_pulls_successes",
	I1002 21:46:34.946887 1112716 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 21:46:34.946924 1112716 command_runner.go:130] > # 	"image_layer_reuse",
	I1002 21:46:34.946933 1112716 command_runner.go:130] > # 	"containers_oom_total",
	I1002 21:46:34.946942 1112716 command_runner.go:130] > # 	"containers_oom",
	I1002 21:46:34.946948 1112716 command_runner.go:130] > # 	"processes_defunct",
	I1002 21:46:34.946953 1112716 command_runner.go:130] > # 	"operations_total",
	I1002 21:46:34.946959 1112716 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 21:46:34.946966 1112716 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 21:46:34.946971 1112716 command_runner.go:130] > # 	"operations_errors_total",
	I1002 21:46:34.947008 1112716 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 21:46:34.947016 1112716 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 21:46:34.947022 1112716 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 21:46:34.947028 1112716 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 21:46:34.947034 1112716 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 21:46:34.947039 1112716 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 21:46:34.947044 1112716 command_runner.go:130] > # ]
	I1002 21:46:34.947051 1112716 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 21:46:34.947085 1112716 command_runner.go:130] > # metrics_port = 9090
	I1002 21:46:34.947105 1112716 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 21:46:34.947122 1112716 command_runner.go:130] > # metrics_socket = ""
	I1002 21:46:34.947160 1112716 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 21:46:34.947189 1112716 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 21:46:34.947211 1112716 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 21:46:34.947245 1112716 command_runner.go:130] > # certificate on any modification event.
	I1002 21:46:34.947266 1112716 command_runner.go:130] > # metrics_cert = ""
	I1002 21:46:34.947288 1112716 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 21:46:34.947323 1112716 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 21:46:34.947344 1112716 command_runner.go:130] > # metrics_key = ""
	I1002 21:46:34.947366 1112716 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 21:46:34.947400 1112716 command_runner.go:130] > [crio.tracing]
	I1002 21:46:34.947422 1112716 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 21:46:34.947444 1112716 command_runner.go:130] > # enable_tracing = false
	I1002 21:46:34.947480 1112716 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 21:46:34.947500 1112716 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1002 21:46:34.947523 1112716 command_runner.go:130] > # Number of samples to collect per million spans.
	I1002 21:46:34.947559 1112716 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 21:46:34.947582 1112716 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 21:46:34.947603 1112716 command_runner.go:130] > [crio.stats]
	I1002 21:46:34.947647 1112716 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 21:46:34.947658 1112716 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 21:46:34.947664 1112716 command_runner.go:130] > # stats_collection_period = 0
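Of everything in the dump above, only a handful of settings are uncommented, i.e. actually overridden by minikube; the rest stays at the upstream CRI-O defaults. As a quick recap (a sketch assembled from the uncommented lines above, not a file that appears verbatim in this log), the effective overrides boil down to:

	[crio.runtime]
	conmon_cgroup = "pod"
	cgroup_manager = "cgroupfs"
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"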
	I1002 21:46:34.948491 1112716 command_runner.go:130] ! time="2023-10-02 21:46:34.938123488Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1002 21:46:34.948533 1112716 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 21:46:34.948614 1112716 cni.go:84] Creating CNI manager for ""
	I1002 21:46:34.948625 1112716 cni.go:136] 2 nodes found, recommending kindnet
	I1002 21:46:34.948633 1112716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 21:46:34.948655 1112716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-629060 NodeName:multinode-629060-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:46:34.948783 1112716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-629060-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
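The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what kubeadm consumes on the joining node; the join step later in this log reads the cluster-side copy back ("Reading configuration from the cluster..."). A quick way to compare the rendered config with what the control plane actually stores, assuming the kubectl context carries the profile name as elsewhere in this report, is:

	kubectl --context multinode-629060 -n kube-system get cm kubeadm-config -o yaml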
	
	I1002 21:46:34.948843 1112716 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-629060-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 21:46:34.948914 1112716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 21:46:34.960171 1112716 command_runner.go:130] > kubeadm
	I1002 21:46:34.960191 1112716 command_runner.go:130] > kubectl
	I1002 21:46:34.960196 1112716 command_runner.go:130] > kubelet
	I1002 21:46:34.960220 1112716 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:46:34.960297 1112716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1002 21:46:34.971285 1112716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1002 21:46:34.993682 1112716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
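The two scp lines above push the rendered kubelet systemd drop-in (10-kubeadm.conf, 430 bytes) and the kubelet.service unit (352 bytes) onto the worker. To double-check what actually landed on the node, something along these lines should work (a sketch, assuming minikube ssh's -n/--node flag to target the m02 node):

	out/minikube-linux-arm64 -p multinode-629060 ssh -n m02 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"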
	I1002 21:46:35.018241 1112716 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:46:35.023437 1112716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
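The one-liner above first drops any stale control-plane.minikube.internal entry from /etc/hosts, then appends the current mapping and copies the temp file back into place with sudo. After it runs, the worker resolves the control plane locally through this hosts entry:

	192.168.58.2	control-plane.minikube.internal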
	I1002 21:46:35.038558 1112716 host.go:66] Checking if "multinode-629060" exists ...
	I1002 21:46:35.038824 1112716 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:46:35.038917 1112716 start.go:304] JoinCluster: &{Name:multinode-629060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-629060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:46:35.039027 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1002 21:46:35.039098 1112716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:46:35.057865 1112716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:46:35.230371 1112716 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0b06lz.fhvzoeqptni8xpk8 --discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 
	I1002 21:46:35.234042 1112716 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 21:46:35.234079 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0b06lz.fhvzoeqptni8xpk8 --discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-629060-m02"
	I1002 21:46:35.280880 1112716 command_runner.go:130] > [preflight] Running pre-flight checks
	I1002 21:46:35.322150 1112716 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:46:35.322171 1112716 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1045-aws
	I1002 21:46:35.322182 1112716 command_runner.go:130] > OS: Linux
	I1002 21:46:35.322189 1112716 command_runner.go:130] > CGROUPS_CPU: enabled
	I1002 21:46:35.322196 1112716 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1002 21:46:35.322203 1112716 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1002 21:46:35.322210 1112716 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1002 21:46:35.322216 1112716 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1002 21:46:35.322222 1112716 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1002 21:46:35.322239 1112716 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1002 21:46:35.322247 1112716 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1002 21:46:35.322253 1112716 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1002 21:46:35.435116 1112716 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1002 21:46:35.435147 1112716 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1002 21:46:35.468523 1112716 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:46:35.468550 1112716 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:46:35.468558 1112716 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1002 21:46:35.571241 1112716 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1002 21:46:38.586103 1112716 command_runner.go:130] > This node has joined the cluster:
	I1002 21:46:38.586127 1112716 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1002 21:46:38.586136 1112716 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1002 21:46:38.586144 1112716 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1002 21:46:38.589377 1112716 command_runner.go:130] ! W1002 21:46:35.280355    1020 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1002 21:46:38.589426 1112716 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1045-aws\n", err: exit status 1
	I1002 21:46:38.589446 1112716 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:46:38.589464 1112716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0b06lz.fhvzoeqptni8xpk8 --discovery-token-ca-cert-hash sha256:d06cdb910bf57b459d6842f992e38a0ba93ae53ce995ef5d38578d43e639f4e9 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-629060-m02": (3.355368074s)
	I1002 21:46:38.589495 1112716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1002 21:46:38.793241 1112716 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1002 21:46:38.793267 1112716 start.go:306] JoinCluster complete in 3.754350429s
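The lines above show the worker join flow: minikube runs kubeadm token create --print-join-command --ttl=0 on the control plane, then executes the printed kubeadm join on the new machine and enables the kubelet service. A minimal sketch of the same steps run by hand, assuming passwordless sudo on both nodes; <token> and <hash> stand in for the freshly generated values printed in this log:

  # On the control plane: print a join command with a non-expiring token
  sudo kubeadm token create --print-join-command --ttl=0

  # On the worker: join, pointing kubeadm at the CRI-O socket and overriding the node name
  sudo kubeadm join control-plane.minikube.internal:8443 \
    --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --ignore-preflight-errors=all \
    --cri-socket unix:///var/run/crio/crio.sock \
    --node-name=multinode-629060-m02

  # Afterwards, make sure the kubelet unit is enabled and running (see the symlink line above)
  sudo systemctl daemon-reload && sudo systemctl enable --now kubelet

Passing the socket with the unix:// scheme avoids the "Usage of CRI endpoints without URL scheme is deprecated" warning that appears in the join output above.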
	I1002 21:46:38.793277 1112716 cni.go:84] Creating CNI manager for ""
	I1002 21:46:38.793283 1112716 cni.go:136] 2 nodes found, recommending kindnet
	I1002 21:46:38.793341 1112716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 21:46:38.797917 1112716 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1002 21:46:38.797942 1112716 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1002 21:46:38.797950 1112716 command_runner.go:130] > Device: 36h/54d	Inode: 1572688     Links: 1
	I1002 21:46:38.797963 1112716 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 21:46:38.797972 1112716 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1002 21:46:38.797978 1112716 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1002 21:46:38.797984 1112716 command_runner.go:130] > Change: 2023-10-02 21:23:08.933130862 +0000
	I1002 21:46:38.797995 1112716 command_runner.go:130] >  Birth: 2023-10-02 21:23:08.889131076 +0000
	I1002 21:46:38.798031 1112716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 21:46:38.798045 1112716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 21:46:38.821780 1112716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 21:46:39.138729 1112716 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1002 21:46:39.144163 1112716 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1002 21:46:39.147784 1112716 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1002 21:46:39.164655 1112716 command_runner.go:130] > daemonset.apps/kindnet configured
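With a second node present, minikube selects kindnet as the CNI and applies its manifest with the bundled kubectl, which is what the clusterrole/clusterrolebinding/serviceaccount/daemonset lines above report. The same apply step, run by hand inside the control-plane node with the paths taken directly from this log:

  # Inside the minikube control-plane node
  sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    -f /var/tmp/minikube/cni.yaml

  # Confirm the kindnet DaemonSet exists (namespace depends on the manifest)
  sudo /var/lib/minikube/binaries/v1.28.2/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig get ds -A | grep kindnet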
	I1002 21:46:39.170469 1112716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:39.170765 1112716 kapi.go:59] client config for multinode-629060: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:46:39.171092 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1002 21:46:39.171105 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:39.171114 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:39.171121 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:39.173680 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:39.173703 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:39.173711 1112716 round_trippers.go:580]     Audit-Id: 041419e1-04d4-4427-bc1d-4e530855805e
	I1002 21:46:39.173719 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:39.173725 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:39.173732 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:39.173738 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:39.173749 1112716 round_trippers.go:580]     Content-Length: 291
	I1002 21:46:39.173756 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:39 GMT
	I1002 21:46:39.173777 1112716 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"308c3efb-883f-4d10-b233-122055076f8b","resourceVersion":"412","creationTimestamp":"2023-10-02T21:46:05Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1002 21:46:39.173868 1112716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-629060" context rescaled to 1 replicas
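The GET against the coredns scale subresource and the "rescaled to 1 replicas" line above correspond to minikube making sure CoreDNS runs a single replica in this profile (the response already reports spec.replicas: 1, so nothing actually changes here). A hedged one-liner that performs the same set-replicas step from the host, assuming the kubectl context is named after the profile:

  kubectl --context multinode-629060 -n kube-system scale deployment coredns --replicas=1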
	I1002 21:46:39.173898 1112716 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1002 21:46:39.177701 1112716 out.go:177] * Verifying Kubernetes components...
	I1002 21:46:39.180146 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:46:39.194555 1112716 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:46:39.194825 1112716 kapi.go:59] client config for multinode-629060: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/multinode-629060/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:46:39.195085 1112716 node_ready.go:35] waiting up to 6m0s for node "multinode-629060-m02" to be "Ready" ...
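Everything that follows is minikube polling GET /api/v1/nodes/multinode-629060-m02 roughly every 500ms and checking the node's Ready condition, which stays "False" until the CNI is up on the worker. A sketch of the same wait done from the host, assuming a kubectl context named after the profile; the 6m timeout mirrors the budget logged above:

  # Block until the worker reports Ready (or the timeout expires)
  kubectl --context multinode-629060 wait --for=condition=Ready \
    node/multinode-629060-m02 --timeout=6m

  # Or inspect the Ready condition directly
  kubectl --context multinode-629060 get node multinode-629060-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'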
	I1002 21:46:39.195148 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:39.195154 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:39.195162 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:39.195169 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:39.198028 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:39.198050 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:39.198058 1112716 round_trippers.go:580]     Audit-Id: b791f3ca-c399-4057-9aa0-5e6054950b77
	I1002 21:46:39.198064 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:39.198071 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:39.198078 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:39.198086 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:39.198092 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:39 GMT
	I1002 21:46:39.198238 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:39.198642 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:39.198651 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:39.198658 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:39.198665 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:39.201044 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:39.201062 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:39.201070 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:39.201076 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:39 GMT
	I1002 21:46:39.201082 1112716 round_trippers.go:580]     Audit-Id: d7ea7c36-8673-4b78-9824-b347d1133d61
	I1002 21:46:39.201088 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:39.201094 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:39.201101 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:39.201241 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:39.702319 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:39.702338 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:39.702348 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:39.702355 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:39.704841 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:39.704864 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:39.704875 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:39.704883 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:39.704889 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:39 GMT
	I1002 21:46:39.704895 1112716 round_trippers.go:580]     Audit-Id: d21f9d98-b81c-4d39-ae2e-08b3c779ce78
	I1002 21:46:39.704902 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:39.704908 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:39.705579 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:40.202176 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:40.202203 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:40.202213 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:40.202220 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:40.205087 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:40.205111 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:40.205120 1112716 round_trippers.go:580]     Audit-Id: 1025957b-2ebd-4370-88b6-9696d0ed75a3
	I1002 21:46:40.205126 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:40.205133 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:40.205139 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:40.205145 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:40.205152 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:40 GMT
	I1002 21:46:40.205429 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:40.702314 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:40.702338 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:40.702352 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:40.702360 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:40.704892 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:40.704911 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:40.704919 1112716 round_trippers.go:580]     Audit-Id: 35180fc2-f658-4747-a489-be4dec4e3eb2
	I1002 21:46:40.704926 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:40.704932 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:40.704938 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:40.704944 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:40.704951 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:40 GMT
	I1002 21:46:40.705134 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:41.202451 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:41.202477 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:41.202487 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:41.202495 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:41.205557 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:41.205619 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:41.205642 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:41.205666 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:41.205687 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:41.205714 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:41 GMT
	I1002 21:46:41.205735 1112716 round_trippers.go:580]     Audit-Id: cee4199f-7fd4-4973-9514-c671e40f81fa
	I1002 21:46:41.205758 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:41.205958 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:41.206368 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:41.701833 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:41.701858 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:41.701869 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:41.701877 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:41.705284 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:41.705310 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:41.705319 1112716 round_trippers.go:580]     Audit-Id: c317d730-5146-4e75-b7f8-8ed60b65be4d
	I1002 21:46:41.705326 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:41.705333 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:41.705339 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:41.705345 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:41.705351 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:41 GMT
	I1002 21:46:41.705486 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:42.202480 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:42.202568 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:42.202585 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:42.202602 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:42.205505 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:42.205529 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:42.205537 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:42.205544 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:42.205617 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:42.205629 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:42 GMT
	I1002 21:46:42.205646 1112716 round_trippers.go:580]     Audit-Id: b5fbfbc6-7f65-4db7-bd6d-1e9f43c4ea7d
	I1002 21:46:42.205653 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:42.205748 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:42.702095 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:42.702116 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:42.702126 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:42.702133 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:42.704787 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:42.704806 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:42.704816 1112716 round_trippers.go:580]     Audit-Id: 25831aa6-707d-4a50-9e4f-76c10695873a
	I1002 21:46:42.704822 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:42.704828 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:42.704834 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:42.704840 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:42.704846 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:42 GMT
	I1002 21:46:42.704969 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:43.201777 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:43.201817 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:43.201827 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:43.201835 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:43.204593 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:43.204613 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:43.204621 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:43 GMT
	I1002 21:46:43.204629 1112716 round_trippers.go:580]     Audit-Id: 6378d9fb-e0e6-477c-a13b-69ae388e96d0
	I1002 21:46:43.204635 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:43.204641 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:43.204647 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:43.204654 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:43.204753 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"449","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1002 21:46:43.702439 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:43.702465 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:43.702474 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:43.702482 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:43.705391 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:43.705420 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:43.705439 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:43.705447 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:43.705453 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:43.705460 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:43.705468 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:43 GMT
	I1002 21:46:43.705474 1112716 round_trippers.go:580]     Audit-Id: e91975c5-9a04-47b2-97ba-8883a3fa9c8a
	I1002 21:46:43.705605 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:43.706004 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:44.201861 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:44.201886 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:44.201895 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:44.201908 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:44.204556 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:44.204615 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:44.204637 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:44.204660 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:44.204694 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:44.204723 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:44.204735 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:44 GMT
	I1002 21:46:44.204742 1112716 round_trippers.go:580]     Audit-Id: 777dd6ab-2105-4d36-9815-b6aee031e9b2
	I1002 21:46:44.204860 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:44.702252 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:44.702274 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:44.702285 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:44.702293 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:44.704962 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:44.704986 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:44.704995 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:44.705003 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:44.705009 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:44 GMT
	I1002 21:46:44.705015 1112716 round_trippers.go:580]     Audit-Id: 14d03a76-ff74-4b85-bb98-9e06482be707
	I1002 21:46:44.705025 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:44.705032 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:44.705161 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:45.201808 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:45.201843 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:45.201855 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:45.201863 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:45.204628 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:45.204655 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:45.204667 1112716 round_trippers.go:580]     Audit-Id: 1e48a667-04cb-4afc-b0cc-da88674fb906
	I1002 21:46:45.204674 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:45.204680 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:45.204686 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:45.204693 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:45.204700 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:45 GMT
	I1002 21:46:45.205011 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:45.702672 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:45.702695 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:45.702705 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:45.702712 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:45.705234 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:45.705260 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:45.705270 1112716 round_trippers.go:580]     Audit-Id: 9ef9432a-6bd6-48ea-b485-47dcfd84bd68
	I1002 21:46:45.705277 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:45.705283 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:45.705289 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:45.705299 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:45.705305 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:45 GMT
	I1002 21:46:45.705591 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:46.202143 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:46.202167 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:46.202176 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:46.202184 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:46.204784 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:46.204848 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:46.204870 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:46.204899 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:46.204923 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:46 GMT
	I1002 21:46:46.204945 1112716 round_trippers.go:580]     Audit-Id: 8b50953c-567a-4817-adfb-53b4021703c5
	I1002 21:46:46.204967 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:46.204992 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:46.205108 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:46.205513 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:46.701810 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:46.701832 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:46.701841 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:46.701848 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:46.704289 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:46.704359 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:46.704382 1112716 round_trippers.go:580]     Audit-Id: 55b8f14d-af1b-45d6-9891-4fabf547a63d
	I1002 21:46:46.704470 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:46.704485 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:46.704492 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:46.704498 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:46.704504 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:46 GMT
	I1002 21:46:46.704615 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:47.202096 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:47.202118 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:47.202128 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:47.202135 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:47.204586 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:47.204611 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:47.204622 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:47.204629 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:47.204638 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:47.204647 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:47.204654 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:47 GMT
	I1002 21:46:47.204667 1112716 round_trippers.go:580]     Audit-Id: 47ba95bb-de7e-4a23-a191-f8b61746acbc
	I1002 21:46:47.204943 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:47.702459 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:47.702486 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:47.702495 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:47.702503 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:47.704993 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:47.705015 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:47.705023 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:47.705030 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:47.705037 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:47.705043 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:47 GMT
	I1002 21:46:47.705049 1112716 round_trippers.go:580]     Audit-Id: bf161243-d7e5-45ac-bcf7-9ddb375a0e68
	I1002 21:46:47.705055 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:47.705188 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:48.201835 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:48.201860 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:48.201869 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:48.201877 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:48.204595 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:48.204616 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:48.204625 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:48.204632 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:48.204638 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:48 GMT
	I1002 21:46:48.204645 1112716 round_trippers.go:580]     Audit-Id: 93b0af82-19ba-45de-a4d3-3d749e54f287
	I1002 21:46:48.204651 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:48.204657 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:48.204807 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"467","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1002 21:46:48.702462 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:48.702483 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:48.702493 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:48.702500 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:48.704943 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:48.704962 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:48.704970 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:48.704976 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:48.704982 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:48.704988 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:48.704995 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:48 GMT
	I1002 21:46:48.705001 1112716 round_trippers.go:580]     Audit-Id: 38f50597-b69b-4ddf-909f-a634a23c8136
	I1002 21:46:48.705236 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:48.705612 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:49.202295 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:49.202316 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:49.202327 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:49.202334 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:49.204819 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:49.204839 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:49.204847 1112716 round_trippers.go:580]     Audit-Id: 03bb2e00-ec33-4b4e-a551-3a3a518e02ba
	I1002 21:46:49.204855 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:49.204861 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:49.204867 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:49.204873 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:49.204879 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:49 GMT
	I1002 21:46:49.205023 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:49.701765 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:49.701790 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:49.701800 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:49.701807 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:49.704324 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:49.704348 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:49.704358 1112716 round_trippers.go:580]     Audit-Id: db412017-0379-4a09-8440-404d1989dc3c
	I1002 21:46:49.704364 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:49.704370 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:49.704376 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:49.704382 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:49.704389 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:49 GMT
	I1002 21:46:49.704599 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:50.202506 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:50.202533 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:50.202543 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:50.202550 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:50.205039 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:50.205060 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:50.205068 1112716 round_trippers.go:580]     Audit-Id: 6a746b1c-6ccd-4047-aadb-21afe0898833
	I1002 21:46:50.205075 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:50.205081 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:50.205088 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:50.205094 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:50.205109 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:50 GMT
	I1002 21:46:50.205382 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:50.702468 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:50.702490 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:50.702503 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:50.702511 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:50.704911 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:50.704930 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:50.704939 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:50.704947 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:50.704953 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:50.704959 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:50 GMT
	I1002 21:46:50.704965 1112716 round_trippers.go:580]     Audit-Id: 2662c006-50d9-4e68-b060-ba3803d5d3ac
	I1002 21:46:50.704972 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:50.705096 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:51.202489 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:51.202509 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:51.202539 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:51.202547 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:51.205240 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:51.205261 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:51.205269 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:51.205276 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:51.205282 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:51 GMT
	I1002 21:46:51.205288 1112716 round_trippers.go:580]     Audit-Id: 0e3ae12c-b661-4e63-8cd6-1d9dd4b30717
	I1002 21:46:51.205295 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:51.205301 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:51.205377 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:51.205780 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:51.702413 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:51.702439 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:51.702448 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:51.702455 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:51.705797 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:51.705819 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:51.705828 1112716 round_trippers.go:580]     Audit-Id: afc0c0e1-5a90-492f-9833-80b6ba34aa58
	I1002 21:46:51.705835 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:51.705841 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:51.705847 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:51.705853 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:51.705860 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:51 GMT
	I1002 21:46:51.706157 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:52.202462 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:52.202488 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:52.202499 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:52.202506 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:52.206202 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:52.206222 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:52.206231 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:52.206237 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:52.206243 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:52.206250 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:52 GMT
	I1002 21:46:52.206256 1112716 round_trippers.go:580]     Audit-Id: 3e5ec4f1-f551-4053-a4bb-0f95e097ba74
	I1002 21:46:52.206263 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:52.206459 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:52.701758 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:52.701784 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:52.701794 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:52.701802 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:52.704357 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:52.704375 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:52.704383 1112716 round_trippers.go:580]     Audit-Id: d3ef292f-c87b-4c82-9d05-ba9ca933ab41
	I1002 21:46:52.704390 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:52.704396 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:52.704402 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:52.704408 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:52.704414 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:52 GMT
	I1002 21:46:52.704556 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:53.202459 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:53.202491 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:53.202502 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:53.202509 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:53.204993 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:53.205015 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:53.205024 1112716 round_trippers.go:580]     Audit-Id: b3e5cab9-81a2-4b2e-b1bd-c45aba465e83
	I1002 21:46:53.205031 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:53.205037 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:53.205044 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:53.205050 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:53.205057 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:53 GMT
	I1002 21:46:53.205242 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:53.702092 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:53.702116 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:53.702127 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:53.702135 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:53.704870 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:53.704890 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:53.704898 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:53 GMT
	I1002 21:46:53.704905 1112716 round_trippers.go:580]     Audit-Id: 58d7aaed-27e5-4f65-a8f7-6e99752ef097
	I1002 21:46:53.704911 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:53.704917 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:53.704923 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:53.704929 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:53.705034 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:53.705425 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:54.202444 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:54.202469 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:54.202480 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:54.202487 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:54.204984 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:54.205006 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:54.205014 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:54 GMT
	I1002 21:46:54.205021 1112716 round_trippers.go:580]     Audit-Id: bc16440c-c952-4fb4-9651-b60b83f45a16
	I1002 21:46:54.205027 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:54.205033 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:54.205039 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:54.205045 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:54.205284 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:54.702349 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:54.702381 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:54.702391 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:54.702402 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:54.704981 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:54.705001 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:54.705009 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:54.705015 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:54.705022 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:54.705028 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:54 GMT
	I1002 21:46:54.705034 1112716 round_trippers.go:580]     Audit-Id: 134c728c-c80e-44b5-a1f9-e286fa01739f
	I1002 21:46:54.705041 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:54.705180 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:55.202281 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:55.202305 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:55.202315 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:55.202322 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:55.205086 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:55.205113 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:55.205127 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:55.205133 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:55.205140 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:55.205146 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:55.205153 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:55 GMT
	I1002 21:46:55.205167 1112716 round_trippers.go:580]     Audit-Id: 74773d84-cb6d-4290-8024-0a5675c39d3e
	I1002 21:46:55.205753 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:55.701833 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:55.701857 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:55.701867 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:55.701874 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:55.704382 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:55.704403 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:55.704412 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:55.704418 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:55.704424 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:55.704430 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:55.704440 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:55 GMT
	I1002 21:46:55.704455 1112716 round_trippers.go:580]     Audit-Id: e103aee9-0eaa-43c1-ae8f-28b779c04d52
	I1002 21:46:55.704678 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:56.201718 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:56.201742 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:56.201752 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:56.201762 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:56.204922 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:46:56.204950 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:56.204958 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:56.204965 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:56 GMT
	I1002 21:46:56.204972 1112716 round_trippers.go:580]     Audit-Id: cc6d6869-1c71-4d18-998c-50df6da53264
	I1002 21:46:56.204978 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:56.204984 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:56.204990 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:56.205516 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:56.205914 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:56.702378 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:56.702402 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:56.702411 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:56.702418 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:56.704879 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:56.704905 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:56.704914 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:56.704921 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:56.704927 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:56.704933 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:56.704940 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:56 GMT
	I1002 21:46:56.704946 1112716 round_trippers.go:580]     Audit-Id: a60d65e1-0346-4481-b445-909637d3e6ed
	I1002 21:46:56.705089 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:57.201863 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:57.201906 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:57.201915 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:57.201922 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:57.204554 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:57.204575 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:57.204584 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:57.204590 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:57 GMT
	I1002 21:46:57.204597 1112716 round_trippers.go:580]     Audit-Id: 93029911-ade4-43d6-8bc2-a3f3eacb4879
	I1002 21:46:57.204603 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:57.204609 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:57.204616 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:57.204726 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:57.702723 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:57.702755 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:57.702765 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:57.702772 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:57.705357 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:57.705378 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:57.705387 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:57.705394 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:57.705400 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:57.705406 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:57.705412 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:57 GMT
	I1002 21:46:57.705418 1112716 round_trippers.go:580]     Audit-Id: 7e1ff821-c078-4439-a688-d47598eadeb4
	I1002 21:46:57.705602 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:58.202683 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:58.202706 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:58.202716 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:58.202724 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:58.205500 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:58.205527 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:58.205536 1112716 round_trippers.go:580]     Audit-Id: aa80b67e-f1e0-45ad-8333-cf0d580c2720
	I1002 21:46:58.205543 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:58.205552 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:58.205559 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:58.205566 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:58.205586 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:58 GMT
	I1002 21:46:58.205707 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:58.206152 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:46:58.702173 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:58.702199 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:58.702210 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:58.702217 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:58.704936 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:58.704971 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:58.704979 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:58 GMT
	I1002 21:46:58.704986 1112716 round_trippers.go:580]     Audit-Id: 0dfd649b-ed19-433a-af31-a388f817783a
	I1002 21:46:58.704992 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:58.704998 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:58.705004 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:58.705011 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:58.705270 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:59.201799 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:59.201825 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:59.201835 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:59.201842 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:59.204592 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:59.204616 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:59.204625 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:59 GMT
	I1002 21:46:59.204633 1112716 round_trippers.go:580]     Audit-Id: eeea6f26-9bae-447c-ad30-368963ab6f68
	I1002 21:46:59.204639 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:59.204645 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:59.204651 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:59.204657 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:59.205072 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:46:59.702408 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:46:59.702436 1112716 round_trippers.go:469] Request Headers:
	I1002 21:46:59.702446 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:46:59.702453 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:46:59.704964 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:46:59.704989 1112716 round_trippers.go:577] Response Headers:
	I1002 21:46:59.704999 1112716 round_trippers.go:580]     Audit-Id: 064ac514-ae34-4fff-884b-fdbce80770e1
	I1002 21:46:59.705006 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:46:59.705012 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:46:59.705021 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:46:59.705028 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:46:59.705035 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:46:59 GMT
	I1002 21:46:59.705183 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:00.202479 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:00.202510 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:00.202571 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:00.202580 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:00.205384 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:00.205422 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:00.205433 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:00 GMT
	I1002 21:47:00.205445 1112716 round_trippers.go:580]     Audit-Id: deccce5c-da7d-43cd-99eb-0080c689669d
	I1002 21:47:00.205453 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:00.205459 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:00.205465 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:00.205474 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:00.205598 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:00.702629 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:00.702656 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:00.702666 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:00.702673 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:00.705287 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:00.705310 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:00.705320 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:00.705327 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:00 GMT
	I1002 21:47:00.705334 1112716 round_trippers.go:580]     Audit-Id: 976880ca-33c6-471b-9536-f8d54d40f70e
	I1002 21:47:00.705340 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:00.705346 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:00.705353 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:00.705553 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:00.705935 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:47:01.202102 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:01.202129 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:01.202139 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:01.202147 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:01.205134 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:01.205162 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:01.205170 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:01 GMT
	I1002 21:47:01.205178 1112716 round_trippers.go:580]     Audit-Id: 181af9e3-79e2-4b0e-8f65-ebee27799316
	I1002 21:47:01.205186 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:01.205192 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:01.205198 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:01.205250 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:01.205530 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:01.701800 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:01.701823 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:01.701833 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:01.701841 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:01.704328 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:01.704354 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:01.704363 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:01.704370 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:01.704376 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:01.704382 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:01.704389 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:01 GMT
	I1002 21:47:01.704395 1112716 round_trippers.go:580]     Audit-Id: 5eb157dd-bc0a-4c37-99c9-0e7044654d3c
	I1002 21:47:01.704634 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:02.202336 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:02.202360 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:02.202371 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:02.202378 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:02.204893 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:02.204913 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:02.204921 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:02.204928 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:02.204935 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:02.204941 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:02.204947 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:02 GMT
	I1002 21:47:02.204954 1112716 round_trippers.go:580]     Audit-Id: 600d52c4-a659-42a7-93d1-a47bb73d0127
	I1002 21:47:02.205074 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:02.702317 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:02.702342 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:02.702352 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:02.702359 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:02.704839 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:02.704907 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:02.704922 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:02.704930 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:02.704937 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:02.704943 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:02.704953 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:02 GMT
	I1002 21:47:02.704960 1112716 round_trippers.go:580]     Audit-Id: 8883d70d-5b2a-4b6c-9aa4-fa320a47654e
	I1002 21:47:02.705458 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:03.202400 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:03.202425 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:03.202437 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:03.202444 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:03.205105 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:03.205127 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:03.205135 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:03 GMT
	I1002 21:47:03.205142 1112716 round_trippers.go:580]     Audit-Id: 22e1c499-794a-4974-8059-bd1258b39026
	I1002 21:47:03.205149 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:03.205155 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:03.205161 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:03.205167 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:03.205322 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:03.205696 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:47:03.702433 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:03.702459 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:03.702470 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:03.702477 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:03.705297 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:03.705320 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:03.705330 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:03.705336 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:03 GMT
	I1002 21:47:03.705343 1112716 round_trippers.go:580]     Audit-Id: bec983d5-a870-4410-9354-53a497da572f
	I1002 21:47:03.705350 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:03.705365 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:03.705371 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:03.705708 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:04.201769 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:04.201794 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:04.201804 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:04.201811 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:04.204229 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:04.204249 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:04.204257 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:04.204265 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:04 GMT
	I1002 21:47:04.204271 1112716 round_trippers.go:580]     Audit-Id: a4ae1a0f-d0ab-441f-a23b-ac960e88ff4a
	I1002 21:47:04.204277 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:04.204283 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:04.204289 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:04.204408 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:04.702341 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:04.702370 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:04.702380 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:04.702387 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:04.704908 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:04.704934 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:04.704942 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:04 GMT
	I1002 21:47:04.704949 1112716 round_trippers.go:580]     Audit-Id: efdddf0e-ca27-44a8-921d-b19029a13cd2
	I1002 21:47:04.704955 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:04.704961 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:04.704968 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:04.704978 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:04.705120 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:05.202206 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:05.202234 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:05.202245 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:05.202252 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:05.204956 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:05.204977 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:05.204985 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:05.204991 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:05.205000 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:05.205006 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:05 GMT
	I1002 21:47:05.205013 1112716 round_trippers.go:580]     Audit-Id: 27d4fd6d-2802-49a1-9bfe-2b16eb6bec58
	I1002 21:47:05.205018 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:05.205162 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:05.701930 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:05.701958 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:05.701968 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:05.701979 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:05.704477 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:05.704497 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:05.704506 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:05.704513 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:05 GMT
	I1002 21:47:05.704519 1112716 round_trippers.go:580]     Audit-Id: 3defbaed-609b-4ab9-ad21-6b25dc8fba97
	I1002 21:47:05.704525 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:05.704542 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:05.704549 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:05.704683 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:05.705047 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:47:06.202429 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:06.202449 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:06.202458 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:06.202465 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:06.204999 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:06.205028 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:06.205038 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:06.205046 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:06.205052 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:06 GMT
	I1002 21:47:06.205058 1112716 round_trippers.go:580]     Audit-Id: 47ec512f-7f36-4722-bf4e-188d1df2aafe
	I1002 21:47:06.205064 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:06.205070 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:06.205529 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:06.701825 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:06.701858 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:06.701878 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:06.701886 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:06.705361 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:47:06.705382 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:06.705390 1112716 round_trippers.go:580]     Audit-Id: c37fd901-9e4c-481f-b6f7-1ff18090c914
	I1002 21:47:06.705397 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:06.705404 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:06.705411 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:06.705417 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:06.705423 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:06 GMT
	I1002 21:47:06.705579 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:07.202458 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:07.202495 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:07.202505 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:07.202512 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:07.205347 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:07.205368 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:07.205376 1112716 round_trippers.go:580]     Audit-Id: 7d237254-2b3d-4976-81ec-af026753a40c
	I1002 21:47:07.205383 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:07.205390 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:07.205396 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:07.205402 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:07.205408 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:07 GMT
	I1002 21:47:07.205504 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:07.702788 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:07.702815 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:07.702824 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:07.702832 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:07.705298 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:07.705323 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:07.705332 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:07.705338 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:07.705344 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:07.705350 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:07.705357 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:07 GMT
	I1002 21:47:07.705365 1112716 round_trippers.go:580]     Audit-Id: 77d7adca-70a6-466f-afc6-c8bfe0971caf
	I1002 21:47:07.705489 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:07.705866 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:47:08.202461 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:08.202493 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:08.202508 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:08.202519 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:08.204960 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:08.204982 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:08.204991 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:08.204998 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:08.205004 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:08 GMT
	I1002 21:47:08.205010 1112716 round_trippers.go:580]     Audit-Id: d17bac85-6a41-491f-b8c1-fbcb00ec1b6e
	I1002 21:47:08.205016 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:08.205022 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:08.205126 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:08.702365 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:08.702390 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:08.702400 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:08.702408 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:08.705334 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:08.705361 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:08.705369 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:08.705376 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:08.705383 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:08 GMT
	I1002 21:47:08.705389 1112716 round_trippers.go:580]     Audit-Id: 9c954dd3-ed62-4955-ab5d-56f06116a421
	I1002 21:47:08.705395 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:08.705401 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:08.705524 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:09.202420 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:09.202442 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:09.202452 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:09.202460 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:09.205047 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:09.205070 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:09.205084 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:09.205092 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:09.205100 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:09.205112 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:09.205119 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:09 GMT
	I1002 21:47:09.205125 1112716 round_trippers.go:580]     Audit-Id: c0bc98e6-4145-4a01-8f39-c24f6999c85b
	I1002 21:47:09.205274 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:09.702481 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:09.702507 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:09.702518 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:09.702526 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:09.705748 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:47:09.705780 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:09.705789 1112716 round_trippers.go:580]     Audit-Id: 9567f56c-1b95-4080-9f21-80c50d9ff887
	I1002 21:47:09.705797 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:09.705804 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:09.705811 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:09.705820 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:09.705837 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:09 GMT
	I1002 21:47:09.706152 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"475","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1002 21:47:09.706594 1112716 node_ready.go:58] node "multinode-629060-m02" has status "Ready":"False"
	I1002 21:47:10.202536 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:10.202566 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.202575 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.202582 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.205076 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.205102 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.205112 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.205119 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.205125 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.205132 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.205138 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.205145 1112716 round_trippers.go:580]     Audit-Id: 62f53ad8-eed1-4db2-9fce-4bdf42cd963f
	I1002 21:47:10.205447 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"496","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1002 21:47:10.205823 1112716 node_ready.go:49] node "multinode-629060-m02" has status "Ready":"True"
	I1002 21:47:10.205841 1112716 node_ready.go:38] duration metric: took 31.010745057s waiting for node "multinode-629060-m02" to be "Ready" ...
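The node_ready.go entries above show the pattern behind this 31s wait: a GET on /api/v1/nodes/multinode-629060-m02 roughly every 500 ms until the node's Ready condition reports True. The following is a minimal client-go sketch of that polling loop, not minikube's actual helper; waitForNodeReady, the kubeconfig loading, and the 500 ms/6 m values are illustrative assumptions inferred from the log cadence.

// Hypothetical sketch of the node-readiness poll seen in the log above.
// Not minikube's node_ready.go; names and timeouts are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady fetches the named node until its Ready condition is True
// or the timeout elapses, sleeping ~500ms between attempts like the log shows.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	// Load ~/.kube/config; in the test run this points at the minikube cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForNodeReady(context.Background(), cs, "multinode-629060-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}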
	I1002 21:47:10.205858 1112716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:47:10.205923 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1002 21:47:10.205931 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.205940 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.205950 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.209677 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:47:10.209704 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.209713 1112716 round_trippers.go:580]     Audit-Id: 295ea88e-7fce-4c4f-81f7-196752104a2f
	I1002 21:47:10.209720 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.209726 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.209732 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.209738 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.209744 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.210148 1112716 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"408","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1002 21:47:10.213033 1112716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.213122 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-5vhnn
	I1002 21:47:10.213133 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.213143 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.213154 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.215569 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.215594 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.215602 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.215609 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.215615 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.215621 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.215627 1112716 round_trippers.go:580]     Audit-Id: 7a42f397-0099-4d59-9e20-530b24064d2e
	I1002 21:47:10.215634 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.215741 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-5vhnn","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb","resourceVersion":"408","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"51a3f5a3-d4b8-4e22-a7ca-9d06ec207310\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1002 21:47:10.216254 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.216271 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.216279 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.216286 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.218482 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.218507 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.218517 1112716 round_trippers.go:580]     Audit-Id: 3a206632-6928-4a86-800d-426166bd52ae
	I1002 21:47:10.218524 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.218531 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.218540 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.218559 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.218565 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.218848 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:10.219254 1112716 pod_ready.go:92] pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:10.219272 1112716 pod_ready.go:81] duration metric: took 6.211174ms waiting for pod "coredns-5dd5756b68-5vhnn" in "kube-system" namespace to be "Ready" ...
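The pod_ready.go entries repeat the same idea per system-critical pod (coredns, etcd, kube-apiserver, and so on): fetch the pod from kube-system and inspect its Ready condition. A small sketch of that single check under the same assumptions as above; podIsReady is a hypothetical helper, not minikube's code.

// Hypothetical sketch of the per-pod readiness check logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod in kube-system has a Ready
// condition whose status is True.
func podIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	ready, err := podIsReady(context.Background(), cs, "etcd-multinode-629060")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd-multinode-629060 Ready:", ready)
}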
	I1002 21:47:10.219284 1112716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.219350 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-629060
	I1002 21:47:10.219360 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.219368 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.219379 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.221495 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.221524 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.221533 1112716 round_trippers.go:580]     Audit-Id: 98a2e1a0-648d-4522-99e5-342df9672b28
	I1002 21:47:10.221539 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.221545 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.221551 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.221560 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.221573 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.221681 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-629060","namespace":"kube-system","uid":"6bb8beb8-c1c5-4b2c-9a6e-1b00db71d13a","resourceVersion":"287","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ed64f9398b8edc929707995e6df5dc48","kubernetes.io/config.mirror":"ed64f9398b8edc929707995e6df5dc48","kubernetes.io/config.seen":"2023-10-02T21:46:05.978598999Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1002 21:47:10.222132 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.222148 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.222156 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.222163 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.224571 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.224594 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.224603 1112716 round_trippers.go:580]     Audit-Id: 5847988c-e409-44de-bc8b-39fdc9ec7d63
	I1002 21:47:10.224609 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.224616 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.224622 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.224628 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.224635 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.224779 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:10.225159 1112716 pod_ready.go:92] pod "etcd-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:10.225177 1112716 pod_ready.go:81] duration metric: took 5.881716ms waiting for pod "etcd-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.225194 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.225276 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-629060
	I1002 21:47:10.225289 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.225297 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.225303 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.227508 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.227532 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.227539 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.227546 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.227552 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.227558 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.227573 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.227583 1112716 round_trippers.go:580]     Audit-Id: adbbfb89-15d1-4c43-ac28-bccd9a3c523a
	I1002 21:47:10.227701 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-629060","namespace":"kube-system","uid":"6a9fbd26-ddd8-4dcc-9a48-217bfab74392","resourceVersion":"293","creationTimestamp":"2023-10-02T21:46:04Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"408ea7957b5ef07fae9dc9a9d3933e01","kubernetes.io/config.mirror":"408ea7957b5ef07fae9dc9a9d3933e01","kubernetes.io/config.seen":"2023-10-02T21:45:57.949256597Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1002 21:47:10.228210 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.228226 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.228234 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.228245 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.230464 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.230486 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.230496 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.230502 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.230508 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.230514 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.230521 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.230531 1112716 round_trippers.go:580]     Audit-Id: 9cffde1d-3ff7-4104-ac01-09e69a265916
	I1002 21:47:10.230645 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:10.231021 1112716 pod_ready.go:92] pod "kube-apiserver-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:10.231039 1112716 pod_ready.go:81] duration metric: took 5.837243ms waiting for pod "kube-apiserver-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.231051 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.231108 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-629060
	I1002 21:47:10.231117 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.231125 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.231133 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.233447 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.233501 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.233522 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.233545 1112716 round_trippers.go:580]     Audit-Id: ded344ea-6f29-42d7-8fb6-569ab61f4a64
	I1002 21:47:10.233560 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.233577 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.233584 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.233591 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.233705 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-629060","namespace":"kube-system","uid":"7477711d-6adc-4851-994e-3d41d599f050","resourceVersion":"289","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"221d35ba8a34143028534e3bbeb90aec","kubernetes.io/config.mirror":"221d35ba8a34143028534e3bbeb90aec","kubernetes.io/config.seen":"2023-10-02T21:46:05.978610239Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1002 21:47:10.234198 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.234213 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.234222 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.234229 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.236903 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.236956 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.236996 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.237005 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.237024 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.237037 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.237044 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.237051 1112716 round_trippers.go:580]     Audit-Id: b805c515-0c4f-4639-8fb5-590391610212
	I1002 21:47:10.237233 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:10.237651 1112716 pod_ready.go:92] pod "kube-controller-manager-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:10.237670 1112716 pod_ready.go:81] duration metric: took 6.609253ms waiting for pod "kube-controller-manager-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.237700 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9slzp" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.403081 1112716 request.go:629] Waited for 165.317087ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9slzp
	I1002 21:47:10.403205 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9slzp
	I1002 21:47:10.403215 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.403228 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.403248 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.406223 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.406281 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.406312 1112716 round_trippers.go:580]     Audit-Id: b0c647b8-6f21-4345-a578-f5b15f487122
	I1002 21:47:10.406332 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.406360 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.406379 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.406392 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.406400 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.406532 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9slzp","generateName":"kube-proxy-","namespace":"kube-system","uid":"053392fd-91ec-4cc0-98c3-d35660bbe40b","resourceVersion":"383","creationTimestamp":"2023-10-02T21:46:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1af0d895-df42-437e-b5ac-d12205e17520","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1af0d895-df42-437e-b5ac-d12205e17520\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1002 21:47:10.603411 1112716 request.go:629] Waited for 196.350395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.603492 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:10.603502 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.603519 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.603528 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.606196 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.606229 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.606237 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.606244 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.606251 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.606258 1112716 round_trippers.go:580]     Audit-Id: 68078dd6-e2af-4316-97c2-778fcfa201f9
	I1002 21:47:10.606264 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.606271 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.606390 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:10.606796 1112716 pod_ready.go:92] pod "kube-proxy-9slzp" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:10.606813 1112716 pod_ready.go:81] duration metric: took 369.10545ms waiting for pod "kube-proxy-9slzp" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.606825 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pr7ck" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:10.803219 1112716 request.go:629] Waited for 196.326092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7ck
	I1002 21:47:10.803283 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pr7ck
	I1002 21:47:10.803295 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:10.803305 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:10.803314 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:10.805884 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:10.805908 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:10.805917 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:10.805923 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:10.805929 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:10.805935 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:10.805942 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:10 GMT
	I1002 21:47:10.805959 1112716 round_trippers.go:580]     Audit-Id: 1fe10822-dde5-4a7b-bbd5-2954e8e0aac0
	I1002 21:47:10.806084 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pr7ck","generateName":"kube-proxy-","namespace":"kube-system","uid":"bee2c1e9-2c75-46ba-abd8-47f89a406ee3","resourceVersion":"460","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1af0d895-df42-437e-b5ac-d12205e17520","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1af0d895-df42-437e-b5ac-d12205e17520\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1002 21:47:11.003009 1112716 request.go:629] Waited for 196.427679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:11.003080 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060-m02
	I1002 21:47:11.003086 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:11.003095 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:11.003102 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:11.006245 1112716 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1002 21:47:11.006273 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:11.006283 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:11.006290 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:11.006297 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:11.006304 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:11 GMT
	I1002 21:47:11.006310 1112716 round_trippers.go:580]     Audit-Id: 9b873611-abf9-4878-8d50-a05c355b030e
	I1002 21:47:11.006317 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:11.006448 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060-m02","uid":"4f343d84-f1b5-4fb2-bde6-d71c7dc6f67b","resourceVersion":"496","creationTimestamp":"2023-10-02T21:46:38Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1002 21:47:11.006873 1112716 pod_ready.go:92] pod "kube-proxy-pr7ck" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:11.006891 1112716 pod_ready.go:81] duration metric: took 400.056108ms waiting for pod "kube-proxy-pr7ck" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:11.006903 1112716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:11.203158 1112716 request.go:629] Waited for 196.166125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629060
	I1002 21:47:11.203225 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-629060
	I1002 21:47:11.203236 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:11.203247 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:11.203260 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:11.205803 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:11.205827 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:11.205844 1112716 round_trippers.go:580]     Audit-Id: 7768c6ed-4130-41aa-b7c4-a93bc8fb0053
	I1002 21:47:11.205851 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:11.205857 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:11.205863 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:11.205871 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:11.205877 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:11 GMT
	I1002 21:47:11.206013 1112716 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-629060","namespace":"kube-system","uid":"7f387fbf-48ab-4405-bfc8-4141f1f993e4","resourceVersion":"294","creationTimestamp":"2023-10-02T21:46:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"41692589b49939f3e56032494ee733e3","kubernetes.io/config.mirror":"41692589b49939f3e56032494ee733e3","kubernetes.io/config.seen":"2023-10-02T21:46:05.978611372Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-02T21:46:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1002 21:47:11.402711 1112716 request.go:629] Waited for 196.261206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:11.402774 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-629060
	I1002 21:47:11.402780 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:11.402788 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:11.402796 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:11.405275 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:11.405333 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:11.405370 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:11.405394 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:11.405415 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:11.405431 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:11 GMT
	I1002 21:47:11.405437 1112716 round_trippers.go:580]     Audit-Id: 28cac62e-7c38-41ed-a7bb-cac11cae6326
	I1002 21:47:11.405444 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:11.405559 1112716 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-02T21:46:02Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1002 21:47:11.405967 1112716 pod_ready.go:92] pod "kube-scheduler-multinode-629060" in "kube-system" namespace has status "Ready":"True"
	I1002 21:47:11.405983 1112716 pod_ready.go:81] duration metric: took 399.073162ms waiting for pod "kube-scheduler-multinode-629060" in "kube-system" namespace to be "Ready" ...
	I1002 21:47:11.405994 1112716 pod_ready.go:38] duration metric: took 1.200121989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 21:47:11.406010 1112716 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 21:47:11.406069 1112716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:47:11.419919 1112716 system_svc.go:56] duration metric: took 13.899488ms WaitForService to wait for kubelet.
	I1002 21:47:11.419995 1112716 kubeadm.go:581] duration metric: took 32.246071061s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 21:47:11.420026 1112716 node_conditions.go:102] verifying NodePressure condition ...
	I1002 21:47:11.603530 1112716 request.go:629] Waited for 183.358292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1002 21:47:11.603614 1112716 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1002 21:47:11.603627 1112716 round_trippers.go:469] Request Headers:
	I1002 21:47:11.603637 1112716 round_trippers.go:473]     Accept: application/json, */*
	I1002 21:47:11.603644 1112716 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1002 21:47:11.606328 1112716 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1002 21:47:11.606353 1112716 round_trippers.go:577] Response Headers:
	I1002 21:47:11.606362 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d276c1a0-77a9-4f6a-b1b6-ff1e8d9218c8
	I1002 21:47:11.606369 1112716 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f7b8ae07-829f-4be6-a552-2e66ca48088f
	I1002 21:47:11.606375 1112716 round_trippers.go:580]     Date: Mon, 02 Oct 2023 21:47:11 GMT
	I1002 21:47:11.606402 1112716 round_trippers.go:580]     Audit-Id: c9761260-0827-43ca-a717-bd4e380d687d
	I1002 21:47:11.606417 1112716 round_trippers.go:580]     Cache-Control: no-cache, private
	I1002 21:47:11.606423 1112716 round_trippers.go:580]     Content-Type: application/json
	I1002 21:47:11.606615 1112716 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"497"},"items":[{"metadata":{"name":"multinode-629060","uid":"52cd4bba-7819-44ae-aba6-511a301524f1","resourceVersion":"389","creationTimestamp":"2023-10-02T21:46:02Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-629060","kubernetes.io/os":"linux","minikube.k8s.io/commit":"02d3b4696241894a75ebcb6562f5842e65de7b86","minikube.k8s.io/name":"multinode-629060","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_02T21_46_07_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1002 21:47:11.607280 1112716 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:47:11.607300 1112716 node_conditions.go:123] node cpu capacity is 2
	I1002 21:47:11.607311 1112716 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 21:47:11.607320 1112716 node_conditions.go:123] node cpu capacity is 2
	I1002 21:47:11.607330 1112716 node_conditions.go:105] duration metric: took 187.298937ms to run NodePressure ...
	I1002 21:47:11.607341 1112716 start.go:228] waiting for startup goroutines ...
	I1002 21:47:11.607368 1112716 start.go:242] writing updated cluster config ...
	I1002 21:47:11.607685 1112716 ssh_runner.go:195] Run: rm -f paused
	I1002 21:47:11.669571 1112716 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 21:47:11.673230 1112716 out.go:177] * Done! kubectl is now configured to use "multinode-629060" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 02 21:46:22 multinode-629060 crio[899]: time="2023-10-02 21:46:22.076689697Z" level=info msg="Starting container: 710360882e89f923c879705ef30f44b817dd73af867f9ac70a1d9d419cd2aaff" id=b1020da0-3e22-490f-a307-7187cf18cbd8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:46:22 multinode-629060 crio[899]: time="2023-10-02 21:46:22.093611416Z" level=info msg="Started container" PID=1909 containerID=710360882e89f923c879705ef30f44b817dd73af867f9ac70a1d9d419cd2aaff description=kube-system/storage-provisioner/storage-provisioner id=b1020da0-3e22-490f-a307-7187cf18cbd8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=82719546970d2425d0613b37c63231404dedeb0720e1f1ecebfb659cab0203da
	Oct 02 21:46:22 multinode-629060 crio[899]: time="2023-10-02 21:46:22.134508724Z" level=info msg="Created container 66d5f36552848da66f13ae27e4955b88794cdbb736904757ce7e532383a372fe: kube-system/coredns-5dd5756b68-5vhnn/coredns" id=47dc10a5-8da2-426f-bbd3-f8c06afe58fd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:46:22 multinode-629060 crio[899]: time="2023-10-02 21:46:22.135270887Z" level=info msg="Starting container: 66d5f36552848da66f13ae27e4955b88794cdbb736904757ce7e532383a372fe" id=716e699c-e77f-4a0d-99d4-8651f69a2b8d name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:46:22 multinode-629060 crio[899]: time="2023-10-02 21:46:22.147629105Z" level=info msg="Started container" PID=1938 containerID=66d5f36552848da66f13ae27e4955b88794cdbb736904757ce7e532383a372fe description=kube-system/coredns-5dd5756b68-5vhnn/coredns id=716e699c-e77f-4a0d-99d4-8651f69a2b8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=585fafb1c73259a7e4cfc182f4545e5f8332a944f7a4be74b3f286ff0d1a6bb0
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.881453052Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-wcgsg/POD" id=0dddf7f8-dfa6-412b-933e-48ca30ecf7ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.881511800Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.902118312Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-wcgsg Namespace:default ID:2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6 UID:50014c06-b219-42ad-a3b6-7b307da03265 NetNS:/var/run/netns/7b3ed681-c622-436c-a2cf-8e689536f370 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.902160068Z" level=info msg="Adding pod default_busybox-5bc68d56bd-wcgsg to CNI network \"kindnet\" (type=ptp)"
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.929906292Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-wcgsg Namespace:default ID:2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6 UID:50014c06-b219-42ad-a3b6-7b307da03265 NetNS:/var/run/netns/7b3ed681-c622-436c-a2cf-8e689536f370 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.930078164Z" level=info msg="Checking pod default_busybox-5bc68d56bd-wcgsg for CNI network kindnet (type=ptp)"
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.952402651Z" level=info msg="Ran pod sandbox 2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6 with infra container: default/busybox-5bc68d56bd-wcgsg/POD" id=0dddf7f8-dfa6-412b-933e-48ca30ecf7ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.953522671Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=8afe378d-b91e-463c-968c-a522646da5bc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.953798600Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=8afe378d-b91e-463c-968c-a522646da5bc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.954731330Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=4edcde55-7b61-4b85-a365-b622a9c6b966 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:47:12 multinode-629060 crio[899]: time="2023-10-02 21:47:12.956198221Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 02 21:47:13 multinode-629060 crio[899]: time="2023-10-02 21:47:13.666196551Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.171500422Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=4edcde55-7b61-4b85-a365-b622a9c6b966 name=/runtime.v1.ImageService/PullImage
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.172441481Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d4e37c2d-ef7c-4ca6-aae6-e0770afe26ef name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.173183599Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d4e37c2d-ef7c-4ca6-aae6-e0770afe26ef name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.174326897Z" level=info msg="Creating container: default/busybox-5bc68d56bd-wcgsg/busybox" id=779b9db7-cd15-485c-8e53-86d05cda5428 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.174432743Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.279499674Z" level=info msg="Created container e340ee3cbe5a39b5d186de0930e6761176a8bc19e785e23589c7beed167e7138: default/busybox-5bc68d56bd-wcgsg/busybox" id=779b9db7-cd15-485c-8e53-86d05cda5428 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.280258785Z" level=info msg="Starting container: e340ee3cbe5a39b5d186de0930e6761176a8bc19e785e23589c7beed167e7138" id=775db884-0c2a-41aa-84cd-002ebb922cdb name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 21:47:15 multinode-629060 crio[899]: time="2023-10-02 21:47:15.290169874Z" level=info msg="Started container" PID=2065 containerID=e340ee3cbe5a39b5d186de0930e6761176a8bc19e785e23589c7beed167e7138 description=default/busybox-5bc68d56bd-wcgsg/busybox id=775db884-0c2a-41aa-84cd-002ebb922cdb name=/runtime.v1.RuntimeService/StartContainer sandboxID=2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e340ee3cbe5a3       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   2371165fecc65       busybox-5bc68d56bd-wcgsg
	66d5f36552848       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   585fafb1c7325       coredns-5dd5756b68-5vhnn
	710360882e89f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   82719546970d2       storage-provisioner
	5b30d7d286f18       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   a0a707381f97f       kindnet-v68mp
	e001090ed652a       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      About a minute ago   Running             kube-proxy                0                   7d3f713c3c60a       kube-proxy-9slzp
	6a91dc91514ef       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   e4e0199365ac6       kube-apiserver-multinode-629060
	cf09eec1d96f2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   90a5e02c81de4       etcd-multinode-629060
	25da422b2f42c       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   c01afbf641167       kube-controller-manager-multinode-629060
	f9d9fae0085ca       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   87752beba1e9b       kube-scheduler-multinode-629060
	
	* 
	* ==> coredns [66d5f36552848da66f13ae27e4955b88794cdbb736904757ce7e532383a372fe] <==
	* [INFO] 10.244.1.2:49145 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124644s
	[INFO] 10.244.0.3:35137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109571s
	[INFO] 10.244.0.3:38170 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00103815s
	[INFO] 10.244.0.3:60400 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000075626s
	[INFO] 10.244.0.3:38937 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000048812s
	[INFO] 10.244.0.3:46023 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001031914s
	[INFO] 10.244.0.3:36389 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069571s
	[INFO] 10.244.0.3:45561 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006825s
	[INFO] 10.244.0.3:33641 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000065632s
	[INFO] 10.244.1.2:55140 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117915s
	[INFO] 10.244.1.2:48749 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062842s
	[INFO] 10.244.1.2:54819 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006409s
	[INFO] 10.244.1.2:34400 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00006226s
	[INFO] 10.244.0.3:41937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113312s
	[INFO] 10.244.0.3:34361 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060898s
	[INFO] 10.244.0.3:48970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000068094s
	[INFO] 10.244.0.3:51529 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057731s
	[INFO] 10.244.1.2:39519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147962s
	[INFO] 10.244.1.2:41397 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141653s
	[INFO] 10.244.1.2:35080 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123733s
	[INFO] 10.244.1.2:49757 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000148545s
	[INFO] 10.244.0.3:52877 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112681s
	[INFO] 10.244.0.3:41583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000047622s
	[INFO] 10.244.0.3:54628 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000050634s
	[INFO] 10.244.0.3:48568 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000071885s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-629060
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-629060
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=multinode-629060
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T21_46_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:46:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629060
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 21:47:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 21:46:21 +0000   Mon, 02 Oct 2023 21:45:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 21:46:21 +0000   Mon, 02 Oct 2023 21:45:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 21:46:21 +0000   Mon, 02 Oct 2023 21:45:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 21:46:21 +0000   Mon, 02 Oct 2023 21:46:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-629060
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 18160f8128d949b08ced2a462890c65f
	  System UUID:                1291d85a-f77c-4afb-af06-bc63bb8e60ea
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wcgsg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-5vhnn                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     62s
	  kube-system                 etcd-multinode-629060                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         74s
	  kube-system                 kindnet-v68mp                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-multinode-629060             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-multinode-629060    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-9slzp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-multinode-629060             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 60s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x3 over 82s)  kubelet          Node multinode-629060 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x3 over 82s)  kubelet          Node multinode-629060 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x2 over 82s)  kubelet          Node multinode-629060 status is now: NodeHasSufficientPID
	  Normal  Starting                 75s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s                kubelet          Node multinode-629060 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s                kubelet          Node multinode-629060 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s                kubelet          Node multinode-629060 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           62s                node-controller  Node multinode-629060 event: Registered Node multinode-629060 in Controller
	  Normal  NodeReady                59s                kubelet          Node multinode-629060 status is now: NodeReady
	
	
	Name:               multinode-629060-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-629060-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:46:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-629060-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 21:47:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 21:47:09 +0000   Mon, 02 Oct 2023 21:46:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 21:47:09 +0000   Mon, 02 Oct 2023 21:46:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 21:47:09 +0000   Mon, 02 Oct 2023 21:46:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 21:47:09 +0000   Mon, 02 Oct 2023 21:47:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-629060-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 70701ae6aa8a40bb91ea7ed68e5730d1
	  System UUID:                1f4af7fb-34a6-4801-ae48-5b01b8e88bfc
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rpjdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-t7rlc               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-pr7ck            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  42s (x5 over 43s)  kubelet          Node multinode-629060-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 43s)  kubelet          Node multinode-629060-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 43s)  kubelet          Node multinode-629060-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node multinode-629060-m02 event: Registered Node multinode-629060-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-629060-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000729] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000b7a96011
	[  +0.001048] FS-Cache: N-key=[8] '7e613b0000000000'
	[  +0.003162] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000c6b3040d
	[  +0.001031] FS-Cache: O-key=[8] '7e613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000165fee4f
	[  +0.001045] FS-Cache: N-key=[8] '7e613b0000000000'
	[Oct 2 21:34] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000092679c6a
	[  +0.001107] FS-Cache: O-key=[8] '7d613b0000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=000000007e0e0088
	[  +0.001044] FS-Cache: N-key=[8] '7d613b0000000000'
	[  +0.310553] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000e895d03e
	[  +0.001082] FS-Cache: O-key=[8] '83613b0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000734ba06c
	[  +0.001060] FS-Cache: N-key=[8] '83613b0000000000'
	[  +1.089292] 9pnet: p9_fd_create_tcp (1073420): problem connecting socket to 192.168.49.1
	
	* 
	* ==> etcd [cf09eec1d96f2858455a045e2b685b1b6a00e6432fabc4ea6f79235a976e4a8b] <==
	* {"level":"info","ts":"2023-10-02T21:45:58.820284Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T21:45:58.825479Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T21:45:58.825721Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T21:45:58.82331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T21:45:58.826197Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-02T21:45:58.823973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-02T21:45:58.826504Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-02T21:45:59.277254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-02T21:45:59.277365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-02T21:45:59.277416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-02T21:45:59.277457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-02T21:45:59.277495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T21:45:59.277536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-02T21:45:59.27757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-02T21:45:59.281295Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:45:59.289464Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-629060 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T21:45:59.291557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:45:59.291706Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:45:59.291758Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T21:45:59.291558Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T21:45:59.292858Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-02T21:45:59.291584Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T21:45:59.293823Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T21:45:59.291612Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T21:45:59.301291Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:47:21 up  4:29,  0 users,  load average: 1.26, 1.87, 2.08
	Linux multinode-629060 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5b30d7d286f18404c2ae305a4b815a741262a4d1cbe9774539b9601b87549ebe] <==
	* podIP = 192.168.58.2
	I1002 21:46:20.828637       1 main.go:116] setting mtu 1500 for CNI 
	I1002 21:46:20.828646       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 21:46:20.828657       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 21:46:21.333087       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:46:21.333124       1 main.go:227] handling current node
	I1002 21:46:31.439849       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:46:31.440080       1 main.go:227] handling current node
	I1002 21:46:41.449633       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:46:41.449660       1 main.go:227] handling current node
	I1002 21:46:41.449670       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 21:46:41.449676       1 main.go:250] Node multinode-629060-m02 has CIDR [10.244.1.0/24] 
	I1002 21:46:41.449838       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1002 21:46:51.455309       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:46:51.455339       1 main.go:227] handling current node
	I1002 21:46:51.455360       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 21:46:51.455366       1 main.go:250] Node multinode-629060-m02 has CIDR [10.244.1.0/24] 
	I1002 21:47:01.468623       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:47:01.468654       1 main.go:227] handling current node
	I1002 21:47:01.468673       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 21:47:01.468679       1 main.go:250] Node multinode-629060-m02 has CIDR [10.244.1.0/24] 
	I1002 21:47:11.473383       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1002 21:47:11.473412       1 main.go:227] handling current node
	I1002 21:47:11.473423       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1002 21:47:11.473429       1 main.go:250] Node multinode-629060-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [6a91dc91514ef1c56dc60de2ecfc5b23d0feda183302e5ec965af2d512c960c0] <==
	* I1002 21:46:02.984347       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 21:46:02.984360       1 cache.go:39] Caches are synced for autoregister controller
	I1002 21:46:03.010223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1002 21:46:03.010265       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1002 21:46:03.010274       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1002 21:46:03.010379       1 shared_informer.go:318] Caches are synced for configmaps
	I1002 21:46:03.014256       1 controller.go:624] quota admission added evaluator for: namespaces
	E1002 21:46:03.023051       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 21:46:03.226527       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 21:46:03.688754       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 21:46:03.693854       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 21:46:03.693881       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 21:46:04.247224       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 21:46:04.298616       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 21:46:04.348943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 21:46:04.356530       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1002 21:46:04.357689       1 controller.go:624] quota admission added evaluator for: endpoints
	I1002 21:46:04.364977       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 21:46:04.848617       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 21:46:05.879040       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 21:46:05.898619       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 21:46:05.915944       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 21:46:18.360670       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1002 21:46:18.406827       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1002 21:47:16.274055       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400bc96bd0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400bb9d3b0), ResponseWriter:(*httpsnoop.rw)(0x400bb9d3b0), Flusher:(*httpsnoop.rw)(0x400bb9d3b0), CloseNotifier:(*httpsnoop.rw)(0x400bb9d3b0), Pusher:(*httpsnoop.rw)(0x400bb9d3b0)}}, encoder:(*versioning.codec)(0x400bb6b680), memAllocator:(*runtime.Allocator)(0x400bbd9fe0)})
	
	* 
	* ==> kube-controller-manager [25da422b2f42cce62859ad7e2cecc954d8064a316fe47d540f3308e8d321d95e] <==
	* I1002 21:46:21.640339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.258µs"
	I1002 21:46:21.656296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.132µs"
	I1002 21:46:23.160740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.598µs"
	I1002 21:46:23.187947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.070752ms"
	I1002 21:46:23.188022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.486µs"
	I1002 21:46:23.244126       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1002 21:46:38.514757       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-629060-m02\" does not exist"
	I1002 21:46:38.525371       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-629060-m02" podCIDRs=["10.244.1.0/24"]
	I1002 21:46:38.537891       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t7rlc"
	I1002 21:46:38.538655       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pr7ck"
	I1002 21:46:43.246881       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-629060-m02"
	I1002 21:46:43.247019       1 event.go:307] "Event occurred" object="multinode-629060-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-629060-m02 event: Registered Node multinode-629060-m02 in Controller"
	I1002 21:47:09.875695       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-629060-m02"
	I1002 21:47:12.520291       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1002 21:47:12.536005       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rpjdg"
	I1002 21:47:12.561919       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wcgsg"
	I1002 21:47:12.593658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="73.565301ms"
	I1002 21:47:12.601588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="7.787283ms"
	I1002 21:47:12.601742       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.543µs"
	I1002 21:47:12.614713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="41.264µs"
	I1002 21:47:13.262458       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-rpjdg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-rpjdg"
	I1002 21:47:15.176000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.39407ms"
	I1002 21:47:15.176735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="77.136µs"
	I1002 21:47:16.266464       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.976984ms"
	I1002 21:47:16.266822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.692µs"
	
	* 
	* ==> kube-proxy [e001090ed652aec4db367440ab85814112b1e6b3e3748a28daf2bd4c6255ea36] <==
	* I1002 21:46:20.817284       1 server_others.go:69] "Using iptables proxy"
	I1002 21:46:20.836041       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1002 21:46:20.862116       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 21:46:20.864578       1 server_others.go:152] "Using iptables Proxier"
	I1002 21:46:20.864684       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 21:46:20.864728       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 21:46:20.864802       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 21:46:20.865106       1 server.go:846] "Version info" version="v1.28.2"
	I1002 21:46:20.866168       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 21:46:20.867467       1 config.go:188] "Starting service config controller"
	I1002 21:46:20.867546       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 21:46:20.867593       1 config.go:97] "Starting endpoint slice config controller"
	I1002 21:46:20.867620       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 21:46:20.869531       1 config.go:315] "Starting node config controller"
	I1002 21:46:20.870420       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 21:46:20.968291       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 21:46:20.968294       1 shared_informer.go:318] Caches are synced for service config
	I1002 21:46:20.970679       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f9d9fae0085ca76d2a1a3582544e3011feba72c0511582753e17c9097215b15a] <==
	* W1002 21:46:03.258594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 21:46:03.259215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 21:46:03.258694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 21:46:03.259342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 21:46:03.258773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 21:46:03.259408       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1002 21:46:03.258832       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 21:46:03.259467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 21:46:03.258863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 21:46:03.259531       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 21:46:03.258944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 21:46:03.259605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1002 21:46:03.258980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 21:46:03.259668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 21:46:03.259032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 21:46:03.259729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 21:46:03.259094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 21:46:03.259789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1002 21:46:03.259128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 21:46:03.259849       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1002 21:46:03.259176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 21:46:03.259908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 21:46:04.212858       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1002 21:46:04.212989       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1002 21:46:06.946477       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 02 21:46:18 multinode-629060 kubelet[1390]: I1002 21:46:18.568168    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/053392fd-91ec-4cc0-98c3-d35660bbe40b-xtables-lock\") pod \"kube-proxy-9slzp\" (UID: \"053392fd-91ec-4cc0-98c3-d35660bbe40b\") " pod="kube-system/kube-proxy-9slzp"
	Oct 02 21:46:18 multinode-629060 kubelet[1390]: I1002 21:46:18.568195    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c073b51c-b148-4045-af7a-2af9e00ab1cf-cni-cfg\") pod \"kindnet-v68mp\" (UID: \"c073b51c-b148-4045-af7a-2af9e00ab1cf\") " pod="kube-system/kindnet-v68mp"
	Oct 02 21:46:18 multinode-629060 kubelet[1390]: I1002 21:46:18.568218    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c073b51c-b148-4045-af7a-2af9e00ab1cf-xtables-lock\") pod \"kindnet-v68mp\" (UID: \"c073b51c-b148-4045-af7a-2af9e00ab1cf\") " pod="kube-system/kindnet-v68mp"
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.802770    1390 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.802835    1390 projected.go:198] Error preparing data for projected volume kube-api-access-nk7m7 for pod kube-system/kube-proxy-9slzp: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.802934    1390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/053392fd-91ec-4cc0-98c3-d35660bbe40b-kube-api-access-nk7m7 podName:053392fd-91ec-4cc0-98c3-d35660bbe40b nodeName:}" failed. No retries permitted until 2023-10-02 21:46:20.302908215 +0000 UTC m=+14.454185588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nk7m7" (UniqueName: "kubernetes.io/projected/053392fd-91ec-4cc0-98c3-d35660bbe40b-kube-api-access-nk7m7") pod "kube-proxy-9slzp" (UID: "053392fd-91ec-4cc0-98c3-d35660bbe40b") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.821568    1390 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.821614    1390 projected.go:198] Error preparing data for projected volume kube-api-access-rqbz9 for pod kube-system/kindnet-v68mp: failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:19 multinode-629060 kubelet[1390]: E1002 21:46:19.821696    1390 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c073b51c-b148-4045-af7a-2af9e00ab1cf-kube-api-access-rqbz9 podName:c073b51c-b148-4045-af7a-2af9e00ab1cf nodeName:}" failed. No retries permitted until 2023-10-02 21:46:20.321673529 +0000 UTC m=+14.472950901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rqbz9" (UniqueName: "kubernetes.io/projected/c073b51c-b148-4045-af7a-2af9e00ab1cf-kube-api-access-rqbz9") pod "kindnet-v68mp" (UID: "c073b51c-b148-4045-af7a-2af9e00ab1cf") : failed to sync configmap cache: timed out waiting for the condition
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.167702    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-v68mp" podStartSLOduration=3.167654127 podCreationTimestamp="2023-10-02 21:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 21:46:21.166817068 +0000 UTC m=+15.318094457" watchObservedRunningTime="2023-10-02 21:46:21.167654127 +0000 UTC m=+15.318931508"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.167810    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9slzp" podStartSLOduration=3.167791333 podCreationTimestamp="2023-10-02 21:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 21:46:21.154050105 +0000 UTC m=+15.305327478" watchObservedRunningTime="2023-10-02 21:46:21.167791333 +0000 UTC m=+15.319068738"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.611395    1390 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.639109    1390 topology_manager.go:215] "Topology Admit Handler" podUID="a90c4a73-8d8d-4bec-832b-c009f3c3bcbb" podNamespace="kube-system" podName="coredns-5dd5756b68-5vhnn"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.641124    1390 topology_manager.go:215] "Topology Admit Handler" podUID="9880c22d-cac3-49f1-b888-048e6bb56999" podNamespace="kube-system" podName="storage-provisioner"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.799357    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsmx2\" (UniqueName: \"kubernetes.io/projected/9880c22d-cac3-49f1-b888-048e6bb56999-kube-api-access-dsmx2\") pod \"storage-provisioner\" (UID: \"9880c22d-cac3-49f1-b888-048e6bb56999\") " pod="kube-system/storage-provisioner"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.799417    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9880c22d-cac3-49f1-b888-048e6bb56999-tmp\") pod \"storage-provisioner\" (UID: \"9880c22d-cac3-49f1-b888-048e6bb56999\") " pod="kube-system/storage-provisioner"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.799447    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a90c4a73-8d8d-4bec-832b-c009f3c3bcbb-config-volume\") pod \"coredns-5dd5756b68-5vhnn\" (UID: \"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb\") " pod="kube-system/coredns-5dd5756b68-5vhnn"
	Oct 02 21:46:21 multinode-629060 kubelet[1390]: I1002 21:46:21.799477    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crjsp\" (UniqueName: \"kubernetes.io/projected/a90c4a73-8d8d-4bec-832b-c009f3c3bcbb-kube-api-access-crjsp\") pod \"coredns-5dd5756b68-5vhnn\" (UID: \"a90c4a73-8d8d-4bec-832b-c009f3c3bcbb\") " pod="kube-system/coredns-5dd5756b68-5vhnn"
	Oct 02 21:46:22 multinode-629060 kubelet[1390]: W1002 21:46:22.009997    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/crio-585fafb1c73259a7e4cfc182f4545e5f8332a944f7a4be74b3f286ff0d1a6bb0 WatchSource:0}: Error finding container 585fafb1c73259a7e4cfc182f4545e5f8332a944f7a4be74b3f286ff0d1a6bb0: Status 404 returned error can't find the container with id 585fafb1c73259a7e4cfc182f4545e5f8332a944f7a4be74b3f286ff0d1a6bb0
	Oct 02 21:46:23 multinode-629060 kubelet[1390]: I1002 21:46:23.158278    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.158237031 podCreationTimestamp="2023-10-02 21:46:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 21:46:22.175968729 +0000 UTC m=+16.327246127" watchObservedRunningTime="2023-10-02 21:46:23.158237031 +0000 UTC m=+17.309514404"
	Oct 02 21:46:23 multinode-629060 kubelet[1390]: I1002 21:46:23.173759    1390 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5vhnn" podStartSLOduration=5.173716893 podCreationTimestamp="2023-10-02 21:46:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 21:46:23.158735261 +0000 UTC m=+17.310012634" watchObservedRunningTime="2023-10-02 21:46:23.173716893 +0000 UTC m=+17.324994266"
	Oct 02 21:47:12 multinode-629060 kubelet[1390]: I1002 21:47:12.579138    1390 topology_manager.go:215] "Topology Admit Handler" podUID="50014c06-b219-42ad-a3b6-7b307da03265" podNamespace="default" podName="busybox-5bc68d56bd-wcgsg"
	Oct 02 21:47:12 multinode-629060 kubelet[1390]: I1002 21:47:12.694514    1390 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgpsn\" (UniqueName: \"kubernetes.io/projected/50014c06-b219-42ad-a3b6-7b307da03265-kube-api-access-sgpsn\") pod \"busybox-5bc68d56bd-wcgsg\" (UID: \"50014c06-b219-42ad-a3b6-7b307da03265\") " pod="default/busybox-5bc68d56bd-wcgsg"
	Oct 02 21:47:12 multinode-629060 kubelet[1390]: W1002 21:47:12.950625    1390 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/crio-2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6 WatchSource:0}: Error finding container 2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6: Status 404 returned error can't find the container with id 2371165fecc65970b28ae0b22b1ae4c23890196f9f6bc723dcfabd96609f09c6
	Oct 02 21:47:17 multinode-629060 kubelet[1390]: E1002 21:47:17.641471    1390 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55426->127.0.0.1:35477: write tcp 127.0.0.1:55426->127.0.0.1:35477: write: broken pipe
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-629060 -n multinode-629060
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-629060 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.36s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (122.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3067820481.exe start -p running-upgrade-377130 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1002 22:06:19.180953 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3067820481.exe start -p running-upgrade-377130 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m51.184608438s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-377130 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-377130 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.090063564s)

                                                
                                                
-- stdout --
	* [running-upgrade-377130] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-377130 in cluster running-upgrade-377130
	* Pulling base image ...
	* Updating the running docker "running-upgrade-377130" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:07:59.837597 1183456 out.go:296] Setting OutFile to fd 1 ...
	I1002 22:07:59.837736 1183456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:07:59.837746 1183456 out.go:309] Setting ErrFile to fd 2...
	I1002 22:07:59.837752 1183456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:07:59.838017 1183456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 22:07:59.838417 1183456 out.go:303] Setting JSON to false
	I1002 22:07:59.839595 1183456 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17427,"bootTime":1696267053,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 22:07:59.839708 1183456 start.go:138] virtualization:  
	I1002 22:07:59.842623 1183456 out.go:177] * [running-upgrade-377130] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 22:07:59.845050 1183456 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 22:07:59.846596 1183456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:07:59.845386 1183456 notify.go:220] Checking for updates...
	I1002 22:07:59.851194 1183456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:07:59.853592 1183456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 22:07:59.856889 1183456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:07:59.859652 1183456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:07:59.863226 1183456 config.go:182] Loaded profile config "running-upgrade-377130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:07:59.865543 1183456 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 22:07:59.867604 1183456 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 22:07:59.913674 1183456 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 22:07:59.913787 1183456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:08:00.036464 1183456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-02 22:08:00.023544007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:08:00.036719 1183456 docker.go:294] overlay module found
	I1002 22:08:00.040005 1183456 out.go:177] * Using the docker driver based on existing profile
	I1002 22:08:00.042017 1183456 start.go:298] selected driver: docker
	I1002 22:08:00.042045 1183456 start.go:902] validating driver "docker" against &{Name:running-upgrade-377130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-377130 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.43 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 22:08:00.042191 1183456 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:08:00.042978 1183456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:08:00.237542 1183456 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-02 22:08:00.215154995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:08:00.238407 1183456 cni.go:84] Creating CNI manager for ""
	I1002 22:08:00.238439 1183456 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:08:00.238453 1183456 start_flags.go:321] config:
	{Name:running-upgrade-377130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-377130 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.43 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 22:08:00.241141 1183456 out.go:177] * Starting control plane node running-upgrade-377130 in cluster running-upgrade-377130
	I1002 22:08:00.243329 1183456 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 22:08:00.245173 1183456 out.go:177] * Pulling base image ...
	I1002 22:08:00.247255 1183456 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 22:08:00.247394 1183456 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 22:08:00.286098 1183456 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1002 22:08:00.286121 1183456 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1002 22:08:00.325523 1183456 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 22:08:00.325681 1183456 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/running-upgrade-377130/config.json ...
	I1002 22:08:00.325995 1183456 cache.go:195] Successfully downloaded all kic artifacts
	I1002 22:08:00.326075 1183456 start.go:365] acquiring machines lock for running-upgrade-377130: {Name:mk8e06503ae79b0f1d178fb7f4b8ab72fdbcd117 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.326139 1183456 start.go:369] acquired machines lock for "running-upgrade-377130" in 37.366µs
	I1002 22:08:00.326153 1183456 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:08:00.326159 1183456 fix.go:54] fixHost starting: 
	I1002 22:08:00.326430 1183456 cli_runner.go:164] Run: docker container inspect running-upgrade-377130 --format={{.State.Status}}
	I1002 22:08:00.326702 1183456 cache.go:107] acquiring lock: {Name:mk828a58fff182971a82ba27f7f0d1f9658a0a29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.326793 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 22:08:00.326803 1183456 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.476µs
	I1002 22:08:00.326812 1183456 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 22:08:00.326820 1183456 cache.go:107] acquiring lock: {Name:mk57bf96569c09fe168ec1fb0058d1b2744351c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.326850 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 22:08:00.326855 1183456 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.726µs
	I1002 22:08:00.326872 1183456 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1002 22:08:00.326880 1183456 cache.go:107] acquiring lock: {Name:mka491d9888aed97f97d4ecaabf6aca59f840d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.326906 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 22:08:00.326911 1183456 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.944µs
	I1002 22:08:00.326918 1183456 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1002 22:08:00.326929 1183456 cache.go:107] acquiring lock: {Name:mk92ffe3650e02e4534f0eb8faffd302ff8f1f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.326957 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 22:08:00.326961 1183456 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 35.372µs
	I1002 22:08:00.326968 1183456 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1002 22:08:00.326976 1183456 cache.go:107] acquiring lock: {Name:mkb3c2acda63bbab01db5c8dceb6574a52ff9d85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.327001 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 22:08:00.327006 1183456 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 30.507µs
	I1002 22:08:00.327013 1183456 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1002 22:08:00.327019 1183456 cache.go:107] acquiring lock: {Name:mkbe0c3870f8630be7dbc27575b7b58ed198ae78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.327044 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 22:08:00.327049 1183456 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.613µs
	I1002 22:08:00.327055 1183456 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1002 22:08:00.327064 1183456 cache.go:107] acquiring lock: {Name:mk8b14f4ccec47ae702a829037d0fc81a29408e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.327102 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 22:08:00.327107 1183456 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 43.848µs
	I1002 22:08:00.327114 1183456 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 22:08:00.327139 1183456 cache.go:107] acquiring lock: {Name:mke4bd636e55f1c34266bcf6f1138c0d3f8866c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:08:00.327165 1183456 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 22:08:00.327169 1183456 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 31.869µs
	I1002 22:08:00.327176 1183456 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1002 22:08:00.327182 1183456 cache.go:87] Successfully saved all images to host disk.
	I1002 22:08:00.356408 1183456 fix.go:102] recreateIfNeeded on running-upgrade-377130: state=Running err=<nil>
	W1002 22:08:00.356447 1183456 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 22:08:00.358878 1183456 out.go:177] * Updating the running docker "running-upgrade-377130" container ...
	I1002 22:08:00.361164 1183456 machine.go:88] provisioning docker machine ...
	I1002 22:08:00.361389 1183456 ubuntu.go:169] provisioning hostname "running-upgrade-377130"
	I1002 22:08:00.361524 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:00.392614 1183456 main.go:141] libmachine: Using SSH client type: native
	I1002 22:08:00.393050 1183456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33911 <nil> <nil>}
	I1002 22:08:00.393071 1183456 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-377130 && echo "running-upgrade-377130" | sudo tee /etc/hostname
	I1002 22:08:00.568626 1183456 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-377130
	
	I1002 22:08:00.568745 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:00.606779 1183456 main.go:141] libmachine: Using SSH client type: native
	I1002 22:08:00.607193 1183456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33911 <nil> <nil>}
	I1002 22:08:00.607220 1183456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-377130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-377130/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-377130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:08:00.771627 1183456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:08:00.771655 1183456 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:08:00.771692 1183456 ubuntu.go:177] setting up certificates
	I1002 22:08:00.771714 1183456 provision.go:83] configureAuth start
	I1002 22:08:00.771787 1183456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-377130
	I1002 22:08:00.800592 1183456 provision.go:138] copyHostCerts
	I1002 22:08:00.800678 1183456 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:08:00.800706 1183456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:08:00.800786 1183456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:08:00.800891 1183456 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:08:00.800900 1183456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:08:00.800927 1183456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:08:00.800985 1183456 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:08:00.800994 1183456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:08:00.801020 1183456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:08:00.801071 1183456 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-377130 san=[192.168.70.43 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-377130]
	I1002 22:08:01.082419 1183456 provision.go:172] copyRemoteCerts
	I1002 22:08:01.082491 1183456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:08:01.082541 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:01.104086 1183456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/running-upgrade-377130/id_rsa Username:docker}
	I1002 22:08:01.222529 1183456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:08:01.338609 1183456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:08:01.397553 1183456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:08:01.471225 1183456 provision.go:86] duration metric: configureAuth took 699.494902ms
	I1002 22:08:01.471261 1183456 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:08:01.471476 1183456 config.go:182] Loaded profile config "running-upgrade-377130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:08:01.471602 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:01.503117 1183456 main.go:141] libmachine: Using SSH client type: native
	I1002 22:08:01.503547 1183456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33911 <nil> <nil>}
	I1002 22:08:01.503569 1183456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:08:02.529023 1183456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:08:02.529045 1183456 machine.go:91] provisioned docker machine in 2.167669488s
	I1002 22:08:02.529056 1183456 start.go:300] post-start starting for "running-upgrade-377130" (driver="docker")
	I1002 22:08:02.529067 1183456 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:08:02.529131 1183456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:08:02.529181 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:02.556287 1183456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/running-upgrade-377130/id_rsa Username:docker}
	I1002 22:08:02.725387 1183456 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:08:02.731052 1183456 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:08:02.731081 1183456 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:08:02.731094 1183456 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:08:02.731101 1183456 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 22:08:02.731111 1183456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:08:02.731170 1183456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:08:02.731257 1183456 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:08:02.731365 1183456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:08:02.757627 1183456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:08:02.885549 1183456 start.go:303] post-start completed in 356.474363ms
	I1002 22:08:02.885658 1183456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:08:02.885731 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:02.950089 1183456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/running-upgrade-377130/id_rsa Username:docker}
	I1002 22:08:03.092947 1183456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:08:03.103452 1183456 fix.go:56] fixHost completed within 2.777281578s
	I1002 22:08:03.103478 1183456 start.go:83] releasing machines lock for "running-upgrade-377130", held for 2.777329955s
	I1002 22:08:03.103562 1183456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-377130
	I1002 22:08:03.148738 1183456 ssh_runner.go:195] Run: cat /version.json
	I1002 22:08:03.148802 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:03.150495 1183456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:08:03.150585 1183456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-377130
	I1002 22:08:03.204905 1183456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/running-upgrade-377130/id_rsa Username:docker}
	I1002 22:08:03.222938 1183456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33911 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/running-upgrade-377130/id_rsa Username:docker}
	W1002 22:08:03.690650 1183456 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 22:08:03.690773 1183456 ssh_runner.go:195] Run: systemctl --version
	I1002 22:08:03.710398 1183456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:08:04.261057 1183456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 22:08:04.297994 1183456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:08:04.425329 1183456 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 22:08:04.425477 1183456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:08:04.467733 1183456 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 22:08:04.467758 1183456 start.go:469] detecting cgroup driver to use...
	I1002 22:08:04.467789 1183456 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 22:08:04.467847 1183456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:08:04.544642 1183456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:08:04.563957 1183456 docker.go:197] disabling cri-docker service (if available) ...
	I1002 22:08:04.564024 1183456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:08:04.594370 1183456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:08:04.683716 1183456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 22:08:04.815985 1183456 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 22:08:04.816066 1183456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:08:05.134618 1183456 docker.go:213] disabling docker service ...
	I1002 22:08:05.134696 1183456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:08:05.181802 1183456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:08:05.204469 1183456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:08:05.459219 1183456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:08:05.758543 1183456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:08:05.779398 1183456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:08:05.813832 1183456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 22:08:05.813901 1183456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:08:05.852809 1183456 out.go:177] 
	W1002 22:08:05.854572 1183456 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 22:08:05.854606 1183456 out.go:239] * 
	* 
	W1002 22:08:05.855576 1183456 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:08:05.857896 1183456 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-377130 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-02 22:08:05.888215779 +0000 UTC m=+2728.278116823
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-377130
helpers_test.go:235: (dbg) docker inspect running-upgrade-377130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613",
	        "Created": "2023-10-02T22:06:33.498085801Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1175518,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T22:06:34.334571972Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613/hostname",
	        "HostsPath": "/var/lib/docker/containers/e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613/hosts",
	        "LogPath": "/var/lib/docker/containers/e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613/e512b8deb93634734623acdeb23f52f7a944f30f79cae775702a87f5037f7613-json.log",
	        "Name": "/running-upgrade-377130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-377130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-377130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b1c78c6080f6fadda39fbd1563da912c204167d88a2c23870740a1cab7a71f31-init/diff:/var/lib/docker/overlay2/6bf059a92827dc1286d47d58a89bc15f44554c6a55e8c417c7faa2bbaf69d764/diff:/var/lib/docker/overlay2/0c0477232c4cb8f0f35de9211001ea1a7468c9f3751408087a22f89685589076/diff:/var/lib/docker/overlay2/68e9387eabe3a4fc1112f23f0997e2063d0ad3f53b9f991996b581c6b2e77241/diff:/var/lib/docker/overlay2/76e1735f5786c631dc12f6d29ccd078ff6e1a085199b739eeabc86ba23351092/diff:/var/lib/docker/overlay2/d4a2ce3c5fee7828954e410d83afdb2e4238d868ac99e2b75c4dd35ca9920d60/diff:/var/lib/docker/overlay2/57dd47b7e8800b4a33d2fc620bc242c638f6ef5cd444cdb53db01b5bd9a10b17/diff:/var/lib/docker/overlay2/77dce330a9fe502f58641dac61b371c9cdf7970e9a16b793ad755a7d9fef1d80/diff:/var/lib/docker/overlay2/039a79e69ab12a1e688334ea51ea9fb663bbb6f89d4f185648bfa69bd8e8a189/diff:/var/lib/docker/overlay2/8580ab19893a560b9cae21ec20baeb851205ba90b345c3cc0335cf1ebec91610/diff:/var/lib/docker/overlay2/9dc882
c938e17828014ad8fdf7bd46d2b35545940fbab0cb44eff0d67fca8765/diff:/var/lib/docker/overlay2/4eb09002f8683d8422b3c4d10ab13c19b5037dc00d1565622cf565f95fb54e75/diff:/var/lib/docker/overlay2/213f424ef49ccd1b6fe134d2e2c744a582d4cfbc948801d2c2ff9ca45c33c804/diff:/var/lib/docker/overlay2/c4b6d026c506054489cd8844531c9d2f577eb8c5048c9a464afdf76d61d63da1/diff:/var/lib/docker/overlay2/b9200664aaba16d1b5cf5385f5ec617d14ceeaa24b5992b4f149885a4190db92/diff:/var/lib/docker/overlay2/ec09735fbaae926611ee29a8a0871b98eb919fe8386baaebd40055959b19733c/diff:/var/lib/docker/overlay2/a65fcd0f491efd4795fdcbda5dd2d5a43d72ffa2d8d30b9a51baed9c6abcb27e/diff:/var/lib/docker/overlay2/e2bbf67c72284156573794296312225b1015fa84988431e22c29df36ca73d5b5/diff:/var/lib/docker/overlay2/45d1f56b0bb11e316023935dbb12f3b68c7b69a7e0c80eb4b03fe44e71dcc607/diff:/var/lib/docker/overlay2/f67c31f673bbf61c641887af4aa77a8e8fd1a4a748a26786733a647a2f7a7f1f/diff:/var/lib/docker/overlay2/6005d6b398bc6b2e7407c0137e6f9d39606b0a984594f6f1be68c4ac1cad8e65/diff:/var/lib/d
ocker/overlay2/d2ff79f69536ab6dbc81123828502b0455f0304d9c43da416a27642719a5070c/diff:/var/lib/docker/overlay2/efabcbabc36f1b65bd02638a9c87764a24074daa6da06053fde6852811992033/diff:/var/lib/docker/overlay2/cbaae45b3b110d34478ff4747c55458be15844068a4415832beb5680acb61d75/diff:/var/lib/docker/overlay2/2e80d2547e3006346696aeba2b3cddbdfac2faf7979a79c73b3e92ee10981636/diff:/var/lib/docker/overlay2/74bb1115d5e9573d0ccd2db26c57308bd8972c800253093a818cda747b0a8574/diff:/var/lib/docker/overlay2/a49238218367319b923ee4f0aa38fb7e4abf9ecb498a4fa01d4fb22425c63fb4/diff:/var/lib/docker/overlay2/b8070c5481f759ae23c9f9af8956426bf498bbf7a18662a081f31e7048abdfd2/diff:/var/lib/docker/overlay2/bf4b2a2bf3ad36a16d3199135e5f47204dddf6e1634cb00d0ed23f8464583a7b/diff:/var/lib/docker/overlay2/0ad095c0ce300507ebad7a294de7720c006a477a9ef548beb96486e3aa773a71/diff:/var/lib/docker/overlay2/a70524b2b4d521d645b7ef0fd67c8f7cd274832385d36cb890901ca90bdef10b/diff:/var/lib/docker/overlay2/9f268f71418be2a990544e7a6831cbfb0cefe592936f4c09c5f43469e63
a7a2f/diff:/var/lib/docker/overlay2/affba4c2476e6e9b05e5f0aef43d2bad1ac41d718554a855479ce4376569821a/diff:/var/lib/docker/overlay2/26705fcbd9ca4e6be9ae5a4d14172a1eda03d7cdb91ddf25042e9ff590874d52/diff:/var/lib/docker/overlay2/ba69714d13e3b230892c0d5b6890cf8b927dcec593226e92ad50daa6f64f5756/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b1c78c6080f6fadda39fbd1563da912c204167d88a2c23870740a1cab7a71f31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b1c78c6080f6fadda39fbd1563da912c204167d88a2c23870740a1cab7a71f31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b1c78c6080f6fadda39fbd1563da912c204167d88a2c23870740a1cab7a71f31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-377130",
	                "Source": "/var/lib/docker/volumes/running-upgrade-377130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-377130",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-377130",
	                "name.minikube.sigs.k8s.io": "running-upgrade-377130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36425de61fa95fc355002b71bf7e8da5bdfeb8c0e3f912de507cc64d84f5afa3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33911"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33910"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33909"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33908"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/36425de61fa9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-377130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.43"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e512b8deb936",
	                        "running-upgrade-377130"
	                    ],
	                    "NetworkID": "7fb0bb33e3dda05306d58356099b91aae917720a6d20ce8f620726014af6094d",
	                    "EndpointID": "fac5a3b49e3cdb44e88b54ed63e8606320d269c27416e4952ff83b29fe4e45b9",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.43",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:2b",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-377130 -n running-upgrade-377130
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-377130 -n running-upgrade-377130: exit status 4 (471.937869ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 22:08:06.318933 1184321 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-377130" does not appear in /home/jenkins/minikube-integration/17323-1042317/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-377130" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-377130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-377130
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-377130: (3.458431709s)
--- FAIL: TestRunningBinaryUpgrade (122.47s)

                                                
                                    
TestMissingContainerUpgrade (138.15s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2529398226.exe start -p missing-upgrade-123767 --memory=2200 --driver=docker  --container-runtime=crio
E1002 21:58:26.868917 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2529398226.exe start -p missing-upgrade-123767 --memory=2200 --driver=docker  --container-runtime=crio: (1m36.099451822s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-123767
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-123767: (2.041782548s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-123767
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-123767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-123767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (35.727456906s)

                                                
                                                
-- stdout --
	* [missing-upgrade-123767] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-123767 in cluster missing-upgrade-123767
	* Pulling base image ...
	* docker "missing-upgrade-123767" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:59:36.953657 1154257 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:59:36.954234 1154257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:59:36.958086 1154257 out.go:309] Setting ErrFile to fd 2...
	I1002 21:59:36.958145 1154257 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:59:36.958611 1154257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:59:36.959158 1154257 out.go:303] Setting JSON to false
	I1002 21:59:36.960330 1154257 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16924,"bootTime":1696267053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:59:36.960655 1154257 start.go:138] virtualization:  
	I1002 21:59:36.964160 1154257 out.go:177] * [missing-upgrade-123767] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:59:36.966514 1154257 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:59:36.968542 1154257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:59:36.966723 1154257 notify.go:220] Checking for updates...
	I1002 21:59:36.973350 1154257 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:59:36.975340 1154257 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:59:36.977327 1154257 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:59:36.979605 1154257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:59:36.981979 1154257 config.go:182] Loaded profile config "missing-upgrade-123767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 21:59:36.984513 1154257 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 21:59:36.986557 1154257 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:59:37.024646 1154257 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:59:37.024867 1154257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:59:37.166117 1154257 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 21:59:37.154989696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:59:37.166223 1154257 docker.go:294] overlay module found
	I1002 21:59:37.168195 1154257 out.go:177] * Using the docker driver based on existing profile
	I1002 21:59:37.169761 1154257 start.go:298] selected driver: docker
	I1002 21:59:37.169775 1154257 start.go:902] validating driver "docker" against &{Name:missing-upgrade-123767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-123767 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.120 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 21:59:37.169884 1154257 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:59:37.170500 1154257 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:59:37.287610 1154257 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 21:59:37.276373764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:59:37.287899 1154257 cni.go:84] Creating CNI manager for ""
	I1002 21:59:37.287911 1154257 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:59:37.287923 1154257 start_flags.go:321] config:
	{Name:missing-upgrade-123767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-123767 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.120 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 21:59:37.290109 1154257 out.go:177] * Starting control plane node missing-upgrade-123767 in cluster missing-upgrade-123767
	I1002 21:59:37.292094 1154257 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:59:37.293716 1154257 out.go:177] * Pulling base image ...
	I1002 21:59:37.295593 1154257 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 21:59:37.295779 1154257 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 21:59:37.317853 1154257 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1002 21:59:37.318024 1154257 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1002 21:59:37.319008 1154257 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1002 21:59:37.373896 1154257 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 21:59:37.374061 1154257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/missing-upgrade-123767/config.json ...
	I1002 21:59:37.374427 1154257 cache.go:107] acquiring lock: {Name:mk828a58fff182971a82ba27f7f0d1f9658a0a29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.374515 1154257 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 21:59:37.374529 1154257 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.202µs
	I1002 21:59:37.374538 1154257 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 21:59:37.374549 1154257 cache.go:107] acquiring lock: {Name:mk57bf96569c09fe168ec1fb0058d1b2744351c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.374624 1154257 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1002 21:59:37.374971 1154257 cache.go:107] acquiring lock: {Name:mkb3c2acda63bbab01db5c8dceb6574a52ff9d85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.375129 1154257 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1002 21:59:37.375412 1154257 cache.go:107] acquiring lock: {Name:mka491d9888aed97f97d4ecaabf6aca59f840d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.375572 1154257 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1002 21:59:37.375815 1154257 cache.go:107] acquiring lock: {Name:mk92ffe3650e02e4534f0eb8faffd302ff8f1f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.375924 1154257 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1002 21:59:37.376103 1154257 cache.go:107] acquiring lock: {Name:mkbe0c3870f8630be7dbc27575b7b58ed198ae78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.381393 1154257 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1002 21:59:37.381733 1154257 cache.go:107] acquiring lock: {Name:mk8b14f4ccec47ae702a829037d0fc81a29408e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.381846 1154257 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1002 21:59:37.382087 1154257 cache.go:107] acquiring lock: {Name:mke4bd636e55f1c34266bcf6f1138c0d3f8866c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:37.382185 1154257 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1002 21:59:37.384277 1154257 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1002 21:59:37.385276 1154257 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1002 21:59:37.385734 1154257 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1002 21:59:37.386150 1154257 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1002 21:59:37.386982 1154257 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1002 21:59:37.387489 1154257 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1002 21:59:37.388532 1154257 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	W1002 21:59:37.851650 1154257 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1002 21:59:37.851778 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1002 21:59:37.864148 1154257 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1002 21:59:37.864212 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1002 21:59:37.880152 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1002 21:59:37.888316 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1002 21:59:37.898063 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1002 21:59:37.916107 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1002 21:59:37.916623 1154257 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1002 21:59:37.916689 1154257 cache.go:162] opening:  /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1002 21:59:37.994241 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 21:59:37.994265 1154257 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 618.166374ms
	I1002 21:59:37.994277 1154257 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  881.34 KiB / 287.99 MiB  0.30% ? p/s ?
	I1002 21:59:38.270551 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 21:59:38.270636 1154257 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 888.550912ms
	I1002 21:59:38.270664 1154257 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.02 MiB / 287.99 MiB  5.56% ? p/s ?
	I1002 21:59:38.438041 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 21:59:38.438136 1154257 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.062323177s
	I1002 21:59:38.438163 1154257 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1002 21:59:38.567735 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 21:59:38.567760 1154257 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.192351675s
	I1002 21:59:38.567774 1154257 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 42.10 MiB
	I1002 21:59:38.819414 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 21:59:38.819441 1154257 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.444892188s
	I1002 21:59:38.819455 1154257 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  31.30 MiB / 287.99 MiB  10.87% 39.39 MiB
	I1002 21:59:39.591079 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 21:59:39.591165 1154257 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.21619781s
	I1002 21:59:39.591203 1154257 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  76.34 MiB / 287.99 MiB  26.51% 38.84 MiB
	I1002 21:59:40.757126 1154257 cache.go:157] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 21:59:40.757153 1154257 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.37542602s
	I1002 21:59:40.757166 1154257 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 21:59:40.757185 1154257 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 34.67 MiB
	I1002 21:59:46.301335 1154257 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1002 21:59:46.301368 1154257 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1002 21:59:47.398480 1154257 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1002 21:59:47.398520 1154257 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:59:47.398580 1154257 start.go:365] acquiring machines lock for missing-upgrade-123767: {Name:mk801a3895da0a0a28e61435681c1034880468ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:47.398656 1154257 start.go:369] acquired machines lock for "missing-upgrade-123767" in 51.438µs
	I1002 21:59:47.398684 1154257 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:59:47.398695 1154257 fix.go:54] fixHost starting: 
	I1002 21:59:47.398975 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:47.415028 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:47.415090 1154257 fix.go:102] recreateIfNeeded on missing-upgrade-123767: state= err=unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:47.415118 1154257 fix.go:107] machineExists: false. err=machine does not exist
	I1002 21:59:47.417075 1154257 out.go:177] * docker "missing-upgrade-123767" container is missing, will recreate.
	I1002 21:59:47.418802 1154257 delete.go:124] DEMOLISHING missing-upgrade-123767 ...
	I1002 21:59:47.418904 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:47.435221 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	W1002 21:59:47.435283 1154257 stop.go:75] unable to get state: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:47.435303 1154257 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:47.435758 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:47.451541 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:47.451607 1154257 delete.go:82] Unable to get host status for missing-upgrade-123767, assuming it has already been deleted: state: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:47.451674 1154257 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-123767
	W1002 21:59:47.468394 1154257 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-123767 returned with exit code 1
	I1002 21:59:47.468432 1154257 kic.go:367] could not find the container missing-upgrade-123767 to remove it. will try anyways
	I1002 21:59:47.468490 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:47.485898 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	W1002 21:59:47.485959 1154257 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:47.486029 1154257 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-123767 /bin/bash -c "sudo init 0"
	W1002 21:59:47.503674 1154257 cli_runner.go:211] docker exec --privileged -t missing-upgrade-123767 /bin/bash -c "sudo init 0" returned with exit code 1
	I1002 21:59:47.503711 1154257 oci.go:647] error shutdown missing-upgrade-123767: docker exec --privileged -t missing-upgrade-123767 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:48.503893 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:48.520757 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:48.520818 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:48.520835 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:48.520865 1154257 retry.go:31] will retry after 588.920767ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:49.110342 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:49.127162 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:49.127225 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:49.127239 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:49.127267 1154257 retry.go:31] will retry after 644.219423ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:49.772095 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:49.789535 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:49.789599 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:49.789614 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:49.789638 1154257 retry.go:31] will retry after 1.307894829s: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:51.098190 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:51.115579 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:51.115645 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:51.115658 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:51.115687 1154257 retry.go:31] will retry after 1.044267378s: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:52.160192 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:52.176857 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:52.176919 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:52.176929 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:52.176953 1154257 retry.go:31] will retry after 2.217933868s: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:54.396382 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:54.413658 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:54.413726 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:54.413740 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:54.413765 1154257 retry.go:31] will retry after 2.739895606s: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:57.153930 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 21:59:57.173635 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 21:59:57.173694 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 21:59:57.173708 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 21:59:57.173733 1154257 retry.go:31] will retry after 7.525913233s: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 22:00:04.702179 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 22:00:04.735917 1154257 cli_runner.go:211] docker container inspect missing-upgrade-123767 --format={{.State.Status}} returned with exit code 1
	I1002 22:00:04.735982 1154257 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	I1002 22:00:04.735996 1154257 oci.go:661] temporary error: container missing-upgrade-123767 status is  but expect it to be exited
	I1002 22:00:04.736030 1154257 oci.go:88] couldn't shut down missing-upgrade-123767 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-123767": docker container inspect missing-upgrade-123767 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-123767
	 
	I1002 22:00:04.736094 1154257 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-123767
	I1002 22:00:04.769593 1154257 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-123767
	W1002 22:00:04.794667 1154257 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-123767 returned with exit code 1
	I1002 22:00:04.794790 1154257 cli_runner.go:164] Run: docker network inspect missing-upgrade-123767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:00:04.821240 1154257 cli_runner.go:164] Run: docker network rm missing-upgrade-123767
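The demolition above is the usual delete-and-recreate path: every state check fails because the container no longer exists, minikube keeps re-running `docker container inspect` with growing delays (588ms, 644ms, ... 7.5s), then gives up and removes the container and its network anyway. A minimal sketch of that poll-with-backoff pattern, written directly against the docker CLI and not taken from minikube's retry.go/oci.go (container name and timings are only illustrative, and a local docker binary is assumed):

// Illustrative sketch, not minikube's actual code: poll a container's state
// with increasing delays and give up after a deadline, the way the log above
// keeps re-running `docker container inspect` before falling back to rm -f.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState shells out to `docker container inspect` and returns the
// container's status string; err is non-nil when the container is missing.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "missing-upgrade-123767" // container name taken from the log
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(30 * time.Second)

	for time.Now().Before(deadline) {
		state, err := containerState(name)
		if err == nil && state == "exited" {
			fmt.Println("container is exited")
			return
		}
		fmt.Printf("will retry after %v: state=%q err=%v\n", delay, state, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
	fmt.Println("couldn't verify container is exited; removing it anyway")
	_ = exec.Command("docker", "rm", "-f", "-v", name).Run()
}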
	I1002 22:00:04.933861 1154257 fix.go:114] Sleeping 1 second for extra luck!
	I1002 22:00:05.934055 1154257 start.go:125] createHost starting for "" (driver="docker")
	I1002 22:00:05.938183 1154257 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1002 22:00:05.938350 1154257 start.go:159] libmachine.API.Create for "missing-upgrade-123767" (driver="docker")
	I1002 22:00:05.938380 1154257 client.go:168] LocalClient.Create starting
	I1002 22:00:05.938459 1154257 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem
	I1002 22:00:05.938496 1154257 main.go:141] libmachine: Decoding PEM data...
	I1002 22:00:05.938510 1154257 main.go:141] libmachine: Parsing certificate...
	I1002 22:00:05.938566 1154257 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem
	I1002 22:00:05.938583 1154257 main.go:141] libmachine: Decoding PEM data...
	I1002 22:00:05.938594 1154257 main.go:141] libmachine: Parsing certificate...
	I1002 22:00:05.938867 1154257 cli_runner.go:164] Run: docker network inspect missing-upgrade-123767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 22:00:05.956575 1154257 cli_runner.go:211] docker network inspect missing-upgrade-123767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 22:00:05.956655 1154257 network_create.go:281] running [docker network inspect missing-upgrade-123767] to gather additional debugging logs...
	I1002 22:00:05.956675 1154257 cli_runner.go:164] Run: docker network inspect missing-upgrade-123767
	W1002 22:00:05.973864 1154257 cli_runner.go:211] docker network inspect missing-upgrade-123767 returned with exit code 1
	I1002 22:00:05.973898 1154257 network_create.go:284] error running [docker network inspect missing-upgrade-123767]: docker network inspect missing-upgrade-123767: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-123767 not found
	I1002 22:00:05.973913 1154257 network_create.go:286] output of [docker network inspect missing-upgrade-123767]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-123767 not found
	
	** /stderr **
	I1002 22:00:05.974019 1154257 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:00:05.992409 1154257 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5e0177270a4f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ae:54:66:71} reservation:<nil>}
	I1002 22:00:05.992912 1154257 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6756a4f4c689 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:77:f4:6c:68} reservation:<nil>}
	I1002 22:00:05.993474 1154257 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8c24a6fb6255 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:55:77:f7:dc} reservation:<nil>}
	I1002 22:00:05.994062 1154257 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000cfe1d0}
	I1002 22:00:05.994086 1154257 network_create.go:124] attempt to create docker network missing-upgrade-123767 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1002 22:00:05.994158 1154257 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-123767 missing-upgrade-123767
	I1002 22:00:06.090491 1154257 network_create.go:108] docker network missing-upgrade-123767 192.168.76.0/24 created
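The subnet scan just above steps through candidate private /24 blocks (192.168.49.0/24, .58 and .67 are taken by existing bridges; .76 is free) before creating the network. A rough sketch of that kind of selection, not minikube's actual network.go, assuming the step of 9 between candidates that the logged sequence suggests:

// Rough sketch (not minikube's network.go): pick the first free /24 in the
// 192.168.x.0 range, stepping by 9 as the subnets in the log suggest
// (49 -> 58 -> 67 -> 76). The "taken" set stands in for subnets already used
// by existing docker bridge networks.
package main

import "fmt"

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		break
	}
}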
	I1002 22:00:06.090526 1154257 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-123767" container
	I1002 22:00:06.090604 1154257 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 22:00:06.110630 1154257 cli_runner.go:164] Run: docker volume create missing-upgrade-123767 --label name.minikube.sigs.k8s.io=missing-upgrade-123767 --label created_by.minikube.sigs.k8s.io=true
	I1002 22:00:06.128147 1154257 oci.go:103] Successfully created a docker volume missing-upgrade-123767
	I1002 22:00:06.128235 1154257 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-123767-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123767 --entrypoint /usr/bin/test -v missing-upgrade-123767:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1002 22:00:06.657586 1154257 oci.go:107] Successfully prepared a docker volume missing-upgrade-123767
	I1002 22:00:06.657627 1154257 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1002 22:00:06.657775 1154257 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 22:00:06.657884 1154257 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 22:00:06.726379 1154257 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-123767 --name missing-upgrade-123767 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-123767 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-123767 --network missing-upgrade-123767 --ip 192.168.76.2 --volume missing-upgrade-123767:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1002 22:00:07.081477 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Running}}
	I1002 22:00:07.111712 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	I1002 22:00:07.133985 1154257 cli_runner.go:164] Run: docker exec missing-upgrade-123767 stat /var/lib/dpkg/alternatives/iptables
	I1002 22:00:07.217437 1154257 oci.go:144] the created container "missing-upgrade-123767" has a running status.
	I1002 22:00:07.217467 1154257 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa...
	I1002 22:00:07.658540 1154257 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 22:00:07.687532 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	I1002 22:00:07.710527 1154257 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 22:00:07.710551 1154257 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-123767 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 22:00:07.789530 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	I1002 22:00:07.818665 1154257 machine.go:88] provisioning docker machine ...
	I1002 22:00:07.818697 1154257 ubuntu.go:169] provisioning hostname "missing-upgrade-123767"
	I1002 22:00:07.818770 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:07.844749 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:07.845189 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:07.845240 1154257 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-123767 && echo "missing-upgrade-123767" | sudo tee /etc/hostname
	I1002 22:00:08.005073 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123767
	
	I1002 22:00:08.005267 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:08.039712 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:08.040131 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:08.040162 1154257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-123767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-123767/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-123767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:00:08.188071 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:00:08.188149 1154257 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:00:08.188208 1154257 ubuntu.go:177] setting up certificates
	I1002 22:00:08.188235 1154257 provision.go:83] configureAuth start
	I1002 22:00:08.188336 1154257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123767
	I1002 22:00:08.217776 1154257 provision.go:138] copyHostCerts
	I1002 22:00:08.217836 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:00:08.217849 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:00:08.217931 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:00:08.218043 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:00:08.218053 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:00:08.218092 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:00:08.218171 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:00:08.218179 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:00:08.218207 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:00:08.218611 1154257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-123767 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-123767]
	I1002 22:00:08.528259 1154257 provision.go:172] copyRemoteCerts
	I1002 22:00:08.528326 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:00:08.528369 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:08.547220 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:08.646471 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:00:08.670529 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:00:08.693849 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:00:08.716043 1154257 provision.go:86] duration metric: configureAuth took 527.780605ms
	I1002 22:00:08.716067 1154257 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:00:08.716260 1154257 config.go:182] Loaded profile config "missing-upgrade-123767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:00:08.716363 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:08.734212 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:08.734633 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:08.734656 1154257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:00:09.169439 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:00:09.169511 1154257 machine.go:91] provisioned docker machine in 1.350813649s
	I1002 22:00:09.169536 1154257 client.go:171] LocalClient.Create took 3.231149383s
	I1002 22:00:09.169571 1154257 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-123767" took 3.231223277s
	I1002 22:00:09.169610 1154257 start.go:300] post-start starting for "missing-upgrade-123767" (driver="docker")
	I1002 22:00:09.169635 1154257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:00:09.169725 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:00:09.169797 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:09.191537 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:09.291103 1154257 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:00:09.295572 1154257 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:00:09.295601 1154257 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:00:09.295617 1154257 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:00:09.295626 1154257 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 22:00:09.295637 1154257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:00:09.295730 1154257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:00:09.295815 1154257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:00:09.295953 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:00:09.305104 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:00:09.330118 1154257 start.go:303] post-start completed in 160.47758ms
	I1002 22:00:09.330561 1154257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123767
	I1002 22:00:09.348625 1154257 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/missing-upgrade-123767/config.json ...
	I1002 22:00:09.348916 1154257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:00:09.348969 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:09.369184 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:09.467487 1154257 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:00:09.473318 1154257 start.go:128] duration metric: createHost completed in 3.539224382s
	I1002 22:00:09.473407 1154257 cli_runner.go:164] Run: docker container inspect missing-upgrade-123767 --format={{.State.Status}}
	W1002 22:00:09.491459 1154257 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 22:00:09.491485 1154257 machine.go:88] provisioning docker machine ...
	I1002 22:00:09.491500 1154257 ubuntu.go:169] provisioning hostname "missing-upgrade-123767"
	I1002 22:00:09.491560 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:09.510956 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:09.511363 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:09.511375 1154257 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-123767 && echo "missing-upgrade-123767" | sudo tee /etc/hostname
	I1002 22:00:09.674540 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-123767
	
	I1002 22:00:09.674621 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:09.703276 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:09.703684 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:09.703708 1154257 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-123767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-123767/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-123767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:00:09.862371 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:00:09.862401 1154257 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:00:09.862419 1154257 ubuntu.go:177] setting up certificates
	I1002 22:00:09.862469 1154257 provision.go:83] configureAuth start
	I1002 22:00:09.862530 1154257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123767
	I1002 22:00:09.888154 1154257 provision.go:138] copyHostCerts
	I1002 22:00:09.888219 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:00:09.888231 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:00:09.888308 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:00:09.888422 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:00:09.888435 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:00:09.888466 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:00:09.888526 1154257 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:00:09.888539 1154257 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:00:09.888566 1154257 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:00:09.888611 1154257 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-123767 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-123767]
	I1002 22:00:10.628577 1154257 provision.go:172] copyRemoteCerts
	I1002 22:00:10.628687 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:00:10.628769 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:10.652415 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:10.755747 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:00:10.783382 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:00:10.808583 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:00:10.838114 1154257 provision.go:86] duration metric: configureAuth took 975.626552ms
	I1002 22:00:10.838184 1154257 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:00:10.838409 1154257 config.go:182] Loaded profile config "missing-upgrade-123767": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:00:10.838561 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:10.858789 1154257 main.go:141] libmachine: Using SSH client type: native
	I1002 22:00:10.859213 1154257 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33889 <nil> <nil>}
	I1002 22:00:10.859236 1154257 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:00:11.177867 1154257 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:00:11.177936 1154257 machine.go:91] provisioned docker machine in 1.686442785s
	I1002 22:00:11.177960 1154257 start.go:300] post-start starting for "missing-upgrade-123767" (driver="docker")
	I1002 22:00:11.177983 1154257 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:00:11.178096 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:00:11.178170 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:11.206848 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:11.307142 1154257 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:00:11.313574 1154257 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:00:11.313618 1154257 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:00:11.313640 1154257 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:00:11.313660 1154257 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 22:00:11.313685 1154257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:00:11.313778 1154257 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:00:11.313882 1154257 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:00:11.314048 1154257 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:00:11.326351 1154257 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:00:11.359372 1154257 start.go:303] post-start completed in 181.354577ms
	I1002 22:00:11.359488 1154257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:00:11.359557 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:11.392455 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:11.494749 1154257 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:00:11.501198 1154257 fix.go:56] fixHost completed within 24.102497669s
	I1002 22:00:11.501225 1154257 start.go:83] releasing machines lock for "missing-upgrade-123767", held for 24.10255444s
	I1002 22:00:11.501299 1154257 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-123767
	I1002 22:00:11.529055 1154257 ssh_runner.go:195] Run: cat /version.json
	I1002 22:00:11.529115 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:11.529368 1154257 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:00:11.529432 1154257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-123767
	I1002 22:00:11.567713 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	I1002 22:00:11.571382 1154257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33889 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/missing-upgrade-123767/id_rsa Username:docker}
	W1002 22:00:11.670505 1154257 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 22:00:11.670588 1154257 ssh_runner.go:195] Run: systemctl --version
	I1002 22:00:11.789893 1154257 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:00:11.894727 1154257 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 22:00:11.900349 1154257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:00:11.929273 1154257 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 22:00:11.929357 1154257 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:00:11.986137 1154257 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 22:00:11.986214 1154257 start.go:469] detecting cgroup driver to use...
	I1002 22:00:11.986278 1154257 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 22:00:11.986363 1154257 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:00:12.031421 1154257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:00:12.046778 1154257 docker.go:197] disabling cri-docker service (if available) ...
	I1002 22:00:12.046899 1154257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:00:12.062323 1154257 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:00:12.077342 1154257 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 22:00:12.097990 1154257 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 22:00:12.098113 1154257 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:00:12.240359 1154257 docker.go:213] disabling docker service ...
	I1002 22:00:12.240465 1154257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:00:12.259801 1154257 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:00:12.279633 1154257 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:00:12.416712 1154257 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:00:12.560332 1154257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:00:12.576011 1154257 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:00:12.595445 1154257 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 22:00:12.595518 1154257 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:00:12.609481 1154257 out.go:177] 
	W1002 22:00:12.611513 1154257 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 22:00:12.611535 1154257 out.go:239] * 
	* 
	W1002 22:00:12.612546 1154257 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:00:12.615103 1154257 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-123767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-02 22:00:12.727363982 +0000 UTC m=+2255.117265026
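The RUNTIME_ENABLE failure above comes from the pause_image update targeting /etc/crio/crio.conf.d/02-crio.conf, a drop-in that is absent from the older kicbase image (v0.0.17, Ubuntu 20.04.1) this upgrade test starts from, so the sed exits with status 2. A minimal illustrative shell sketch of a guarded variant, assuming a fallback to the stock /etc/crio/crio.conf path (the fallback is an assumption for illustration only, not minikube's actual behaviour):

	# Pick whichever cri-o config file exists before editing the pause image.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"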
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-123767
helpers_test.go:235: (dbg) docker inspect missing-upgrade-123767:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26",
	        "Created": "2023-10-02T22:00:06.74280106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1155470,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T22:00:07.073602556Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26/hostname",
	        "HostsPath": "/var/lib/docker/containers/24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26/hosts",
	        "LogPath": "/var/lib/docker/containers/24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26/24e749f999498abb5f7b97bcde30ffbde2aadbf9f3e4ec5439b551faa8a03b26-json.log",
	        "Name": "/missing-upgrade-123767",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-123767:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-123767",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2cd8cc08820cafde88a27de023e929b01619d5d1f2150409292b37eb3c922eeb-init/diff:/var/lib/docker/overlay2/6bf059a92827dc1286d47d58a89bc15f44554c6a55e8c417c7faa2bbaf69d764/diff:/var/lib/docker/overlay2/0c0477232c4cb8f0f35de9211001ea1a7468c9f3751408087a22f89685589076/diff:/var/lib/docker/overlay2/68e9387eabe3a4fc1112f23f0997e2063d0ad3f53b9f991996b581c6b2e77241/diff:/var/lib/docker/overlay2/76e1735f5786c631dc12f6d29ccd078ff6e1a085199b739eeabc86ba23351092/diff:/var/lib/docker/overlay2/d4a2ce3c5fee7828954e410d83afdb2e4238d868ac99e2b75c4dd35ca9920d60/diff:/var/lib/docker/overlay2/57dd47b7e8800b4a33d2fc620bc242c638f6ef5cd444cdb53db01b5bd9a10b17/diff:/var/lib/docker/overlay2/77dce330a9fe502f58641dac61b371c9cdf7970e9a16b793ad755a7d9fef1d80/diff:/var/lib/docker/overlay2/039a79e69ab12a1e688334ea51ea9fb663bbb6f89d4f185648bfa69bd8e8a189/diff:/var/lib/docker/overlay2/8580ab19893a560b9cae21ec20baeb851205ba90b345c3cc0335cf1ebec91610/diff:/var/lib/docker/overlay2/9dc882
c938e17828014ad8fdf7bd46d2b35545940fbab0cb44eff0d67fca8765/diff:/var/lib/docker/overlay2/4eb09002f8683d8422b3c4d10ab13c19b5037dc00d1565622cf565f95fb54e75/diff:/var/lib/docker/overlay2/213f424ef49ccd1b6fe134d2e2c744a582d4cfbc948801d2c2ff9ca45c33c804/diff:/var/lib/docker/overlay2/c4b6d026c506054489cd8844531c9d2f577eb8c5048c9a464afdf76d61d63da1/diff:/var/lib/docker/overlay2/b9200664aaba16d1b5cf5385f5ec617d14ceeaa24b5992b4f149885a4190db92/diff:/var/lib/docker/overlay2/ec09735fbaae926611ee29a8a0871b98eb919fe8386baaebd40055959b19733c/diff:/var/lib/docker/overlay2/a65fcd0f491efd4795fdcbda5dd2d5a43d72ffa2d8d30b9a51baed9c6abcb27e/diff:/var/lib/docker/overlay2/e2bbf67c72284156573794296312225b1015fa84988431e22c29df36ca73d5b5/diff:/var/lib/docker/overlay2/45d1f56b0bb11e316023935dbb12f3b68c7b69a7e0c80eb4b03fe44e71dcc607/diff:/var/lib/docker/overlay2/f67c31f673bbf61c641887af4aa77a8e8fd1a4a748a26786733a647a2f7a7f1f/diff:/var/lib/docker/overlay2/6005d6b398bc6b2e7407c0137e6f9d39606b0a984594f6f1be68c4ac1cad8e65/diff:/var/lib/d
ocker/overlay2/d2ff79f69536ab6dbc81123828502b0455f0304d9c43da416a27642719a5070c/diff:/var/lib/docker/overlay2/efabcbabc36f1b65bd02638a9c87764a24074daa6da06053fde6852811992033/diff:/var/lib/docker/overlay2/cbaae45b3b110d34478ff4747c55458be15844068a4415832beb5680acb61d75/diff:/var/lib/docker/overlay2/2e80d2547e3006346696aeba2b3cddbdfac2faf7979a79c73b3e92ee10981636/diff:/var/lib/docker/overlay2/74bb1115d5e9573d0ccd2db26c57308bd8972c800253093a818cda747b0a8574/diff:/var/lib/docker/overlay2/a49238218367319b923ee4f0aa38fb7e4abf9ecb498a4fa01d4fb22425c63fb4/diff:/var/lib/docker/overlay2/b8070c5481f759ae23c9f9af8956426bf498bbf7a18662a081f31e7048abdfd2/diff:/var/lib/docker/overlay2/bf4b2a2bf3ad36a16d3199135e5f47204dddf6e1634cb00d0ed23f8464583a7b/diff:/var/lib/docker/overlay2/0ad095c0ce300507ebad7a294de7720c006a477a9ef548beb96486e3aa773a71/diff:/var/lib/docker/overlay2/a70524b2b4d521d645b7ef0fd67c8f7cd274832385d36cb890901ca90bdef10b/diff:/var/lib/docker/overlay2/9f268f71418be2a990544e7a6831cbfb0cefe592936f4c09c5f43469e63
a7a2f/diff:/var/lib/docker/overlay2/affba4c2476e6e9b05e5f0aef43d2bad1ac41d718554a855479ce4376569821a/diff:/var/lib/docker/overlay2/26705fcbd9ca4e6be9ae5a4d14172a1eda03d7cdb91ddf25042e9ff590874d52/diff:/var/lib/docker/overlay2/ba69714d13e3b230892c0d5b6890cf8b927dcec593226e92ad50daa6f64f5756/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2cd8cc08820cafde88a27de023e929b01619d5d1f2150409292b37eb3c922eeb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2cd8cc08820cafde88a27de023e929b01619d5d1f2150409292b37eb3c922eeb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2cd8cc08820cafde88a27de023e929b01619d5d1f2150409292b37eb3c922eeb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-123767",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-123767/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-123767",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-123767",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-123767",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c82d479a9b453fea7c2e7fdf9d2d31fcc006abbf27cb30c383b1e0b3284bb16",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33889"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33888"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33887"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33886"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8c82d479a9b4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-123767": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "24e749f99949",
	                        "missing-upgrade-123767"
	                    ],
	                    "NetworkID": "bbc8da70a8486d90beae68d254387b22176ccf774cbd2b4a8aebe7d9d9c46271",
	                    "EndpointID": "7fdcf09cdb323c1f23a38b8dc0ef4932da3dcaa306e0069cd1c6f39dc5ecb299",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-123767 -n missing-upgrade-123767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-123767 -n missing-upgrade-123767: exit status 6 (475.055738ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 22:00:13.217331 1156587 status.go:415] kubeconfig endpoint: got: 192.168.59.120:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-123767" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
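The exit status 6 above stems from the stale kubeconfig endpoint reported in stderr (got 192.168.59.120:8443, want 192.168.76.2:8443). An illustrative way to inspect and repair the context, using the command the warning itself suggests (the jsonpath query and the explicit profile flag are added here for illustration):

	# Show the API server endpoint the current kubeconfig context points at.
	kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
	# Repoint the kubeconfig at the running minikube node for this profile.
	minikube update-context -p missing-upgrade-123767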
helpers_test.go:175: Cleaning up "missing-upgrade-123767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-123767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-123767: (2.221919929s)
--- FAIL: TestMissingContainerUpgrade (138.15s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (319.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-050274 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-050274 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5m12.593250477s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-050274] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-050274 in cluster pause-050274
	* Pulling base image ...
	* Updating the running docker "pause-050274" container ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-050274" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:59:24.861404 1152871 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:59:24.862033 1152871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:59:24.862058 1152871 out.go:309] Setting ErrFile to fd 2...
	I1002 21:59:24.862066 1152871 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:59:24.862389 1152871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:59:24.862812 1152871 out.go:303] Setting JSON to false
	I1002 21:59:24.864979 1152871 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16912,"bootTime":1696267053,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:59:24.865067 1152871 start.go:138] virtualization:  
	I1002 21:59:24.867958 1152871 out.go:177] * [pause-050274] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:59:24.870627 1152871 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:59:24.872728 1152871 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:59:24.870705 1152871 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:59:24.870761 1152871 notify.go:220] Checking for updates...
	I1002 21:59:24.876417 1152871 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:59:24.878526 1152871 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:59:24.880659 1152871 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:59:24.882666 1152871 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:59:24.885175 1152871 config.go:182] Loaded profile config "pause-050274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:59:24.886249 1152871 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:59:24.938008 1152871 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:59:24.948282 1152871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:59:25.142125 1152871 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1002 21:59:25.153983 1152871 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 21:59:25.142413005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:59:25.154110 1152871 docker.go:294] overlay module found
	I1002 21:59:25.157762 1152871 out.go:177] * Using the docker driver based on existing profile
	I1002 21:59:25.159675 1152871 start.go:298] selected driver: docker
	I1002 21:59:25.159698 1152871 start.go:902] validating driver "docker" against &{Name:pause-050274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-050274 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:59:25.159845 1152871 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:59:25.159953 1152871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:59:25.291442 1152871 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 21:59:25.277972126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:59:25.291841 1152871 cni.go:84] Creating CNI manager for ""
	I1002 21:59:25.291860 1152871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:59:25.291872 1152871 start_flags.go:321] config:
	{Name:pause-050274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-050274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-p
rovisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:59:25.295349 1152871 out.go:177] * Starting control plane node pause-050274 in cluster pause-050274
	I1002 21:59:25.297336 1152871 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:59:25.299105 1152871 out.go:177] * Pulling base image ...
	I1002 21:59:25.300877 1152871 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:59:25.300934 1152871 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:59:25.300947 1152871 cache.go:57] Caching tarball of preloaded images
	I1002 21:59:25.301024 1152871 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 21:59:25.301037 1152871 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 21:59:25.301181 1152871 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/config.json ...
	I1002 21:59:25.301445 1152871 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:59:25.320471 1152871 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 21:59:25.320512 1152871 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 21:59:25.320527 1152871 cache.go:195] Successfully downloaded all kic artifacts
	I1002 21:59:25.320600 1152871 start.go:365] acquiring machines lock for pause-050274: {Name:mka82d70eec701e728163e02ccd89deb9a2bd454 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:59:25.320675 1152871 start.go:369] acquired machines lock for "pause-050274" in 45.964µs
	I1002 21:59:25.320699 1152871 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:59:25.320705 1152871 fix.go:54] fixHost starting: 
	I1002 21:59:25.320973 1152871 cli_runner.go:164] Run: docker container inspect pause-050274 --format={{.State.Status}}
	I1002 21:59:25.345721 1152871 fix.go:102] recreateIfNeeded on pause-050274: state=Running err=<nil>
	W1002 21:59:25.345762 1152871 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 21:59:25.348200 1152871 out.go:177] * Updating the running docker "pause-050274" container ...
	I1002 21:59:25.351111 1152871 machine.go:88] provisioning docker machine ...
	I1002 21:59:25.351143 1152871 ubuntu.go:169] provisioning hostname "pause-050274"
	I1002 21:59:25.351206 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:25.378652 1152871 main.go:141] libmachine: Using SSH client type: native
	I1002 21:59:25.379083 1152871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1002 21:59:25.379100 1152871 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-050274 && echo "pause-050274" | sudo tee /etc/hostname
	I1002 21:59:25.542334 1152871 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-050274
	
	I1002 21:59:25.542502 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:25.562331 1152871 main.go:141] libmachine: Using SSH client type: native
	I1002 21:59:25.562738 1152871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1002 21:59:25.562756 1152871 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-050274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-050274/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-050274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:59:25.708346 1152871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:59:25.708376 1152871 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 21:59:25.708401 1152871 ubuntu.go:177] setting up certificates
	I1002 21:59:25.708413 1152871 provision.go:83] configureAuth start
	I1002 21:59:25.708480 1152871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-050274
	I1002 21:59:25.727665 1152871 provision.go:138] copyHostCerts
	I1002 21:59:25.727758 1152871 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 21:59:25.727783 1152871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 21:59:25.727853 1152871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 21:59:25.727949 1152871 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 21:59:25.727954 1152871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 21:59:25.727980 1152871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 21:59:25.728029 1152871 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 21:59:25.728034 1152871 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 21:59:25.728057 1152871 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 21:59:25.728104 1152871 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.pause-050274 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-050274]
	I1002 21:59:26.499410 1152871 provision.go:172] copyRemoteCerts
	I1002 21:59:26.499483 1152871 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:59:26.499534 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:26.518497 1152871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/pause-050274/id_rsa Username:docker}
	I1002 21:59:26.625155 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:59:26.657297 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1002 21:59:26.687370 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:59:26.716942 1152871 provision.go:86] duration metric: configureAuth took 1.008514143s
	I1002 21:59:26.717010 1152871 ubuntu.go:193] setting minikube options for container-runtime
	I1002 21:59:26.717295 1152871 config.go:182] Loaded profile config "pause-050274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:59:26.717406 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:26.736165 1152871 main.go:141] libmachine: Using SSH client type: native
	I1002 21:59:26.736587 1152871 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33880 <nil> <nil>}
	I1002 21:59:26.736608 1152871 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:59:32.231741 1152871 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:59:32.231767 1152871 machine.go:91] provisioned docker machine in 6.880633762s
	I1002 21:59:32.231777 1152871 start.go:300] post-start starting for "pause-050274" (driver="docker")
	I1002 21:59:32.231789 1152871 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:59:32.231851 1152871 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:59:32.231890 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:32.253613 1152871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/pause-050274/id_rsa Username:docker}
	I1002 21:59:32.356161 1152871 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:59:32.360450 1152871 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:59:32.360489 1152871 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 21:59:32.360504 1152871 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 21:59:32.360518 1152871 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 21:59:32.360528 1152871 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 21:59:32.360592 1152871 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 21:59:32.360676 1152871 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 21:59:32.360782 1152871 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:59:32.371623 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:59:32.420381 1152871 start.go:303] post-start completed in 188.587635ms
	I1002 21:59:32.420470 1152871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:59:32.420536 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:32.448280 1152871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/pause-050274/id_rsa Username:docker}
	I1002 21:59:32.606302 1152871 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:59:32.616541 1152871 fix.go:56] fixHost completed within 7.295815201s
	I1002 21:59:32.616563 1152871 start.go:83] releasing machines lock for "pause-050274", held for 7.295874778s
	I1002 21:59:32.616642 1152871 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-050274
	I1002 21:59:32.647097 1152871 ssh_runner.go:195] Run: cat /version.json
	I1002 21:59:32.647164 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:32.647396 1152871 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:59:32.647435 1152871 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-050274
	I1002 21:59:32.682741 1152871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/pause-050274/id_rsa Username:docker}
	I1002 21:59:32.694490 1152871 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33880 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/pause-050274/id_rsa Username:docker}
	I1002 21:59:33.313496 1152871 ssh_runner.go:195] Run: systemctl --version
	I1002 21:59:33.337593 1152871 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:59:33.701339 1152871 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 21:59:33.714455 1152871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:59:33.743489 1152871 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 21:59:33.743576 1152871 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:59:33.764398 1152871 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:59:33.764422 1152871 start.go:469] detecting cgroup driver to use...
	I1002 21:59:33.764459 1152871 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 21:59:33.764515 1152871 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:59:33.783267 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:59:33.802493 1152871 docker.go:197] disabling cri-docker service (if available) ...
	I1002 21:59:33.802611 1152871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:59:33.831119 1152871 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:59:33.878671 1152871 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:59:34.105788 1152871 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:59:34.317828 1152871 docker.go:213] disabling docker service ...
	I1002 21:59:34.317896 1152871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:59:34.334857 1152871 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:59:34.350901 1152871 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:59:34.547113 1152871 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:59:34.854759 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:59:34.877464 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:59:34.904887 1152871 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 21:59:34.904949 1152871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:59:34.940263 1152871 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 21:59:34.940331 1152871 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:59:34.972703 1152871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:59:35.013141 1152871 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:59:35.053939 1152871 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:59:35.089745 1152871 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:59:35.138228 1152871 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:59:35.174671 1152871 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:59:35.794814 1152871 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:59:58.154910 1152871 ssh_runner.go:235] Completed: sudo systemctl restart crio: (22.360062716s)
	I1002 21:59:58.154935 1152871 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:59:58.154989 1152871 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:59:58.159921 1152871 start.go:537] Will wait 60s for crictl version
	I1002 21:59:58.159982 1152871 ssh_runner.go:195] Run: which crictl
	I1002 21:59:58.164619 1152871 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 21:59:58.209398 1152871 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 21:59:58.209486 1152871 ssh_runner.go:195] Run: crio --version
	I1002 21:59:58.255936 1152871 ssh_runner.go:195] Run: crio --version
	I1002 21:59:58.303991 1152871 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 21:59:58.305974 1152871 cli_runner.go:164] Run: docker network inspect pause-050274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:59:58.324536 1152871 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1002 21:59:58.329625 1152871 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:59:58.329697 1152871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:59:58.379758 1152871 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:59:58.379779 1152871 crio.go:415] Images already preloaded, skipping extraction
	I1002 21:59:58.379841 1152871 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:59:58.425896 1152871 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 21:59:58.425918 1152871 cache_images.go:84] Images are preloaded, skipping loading
	I1002 21:59:58.425991 1152871 ssh_runner.go:195] Run: crio config
	I1002 21:59:58.503724 1152871 cni.go:84] Creating CNI manager for ""
	I1002 21:59:58.503744 1152871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:59:58.503767 1152871 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 21:59:58.503788 1152871 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-050274 NodeName:pause-050274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:59:58.503940 1152871 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-050274"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:59:58.504010 1152871 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-050274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-050274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 21:59:58.504076 1152871 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 21:59:58.515309 1152871 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:59:58.515392 1152871 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:59:58.525485 1152871 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1002 21:59:58.547622 1152871 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:59:58.574149 1152871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1002 21:59:58.619832 1152871 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:59:58.636813 1152871 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274 for IP: 192.168.67.2
	I1002 21:59:58.636840 1152871 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:59:58.636985 1152871 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 21:59:58.637035 1152871 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 21:59:58.637114 1152871 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/client.key
	I1002 21:59:58.637660 1152871 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/apiserver.key.c7fa3a9e
	I1002 21:59:58.638096 1152871 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/proxy-client.key
	I1002 21:59:58.638237 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 21:59:58.638273 1152871 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 21:59:58.638288 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:59:58.638315 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:59:58.638346 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:59:58.638378 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 21:59:58.638427 1152871 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 21:59:58.639141 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 21:59:58.692080 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:59:58.739404 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:59:58.796406 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:59:58.843959 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:59:58.884996 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 21:59:58.914450 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:59:58.943958 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 21:59:58.974052 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 21:59:59.005060 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 21:59:59.047013 1152871 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:59:59.078226 1152871 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:59:59.100816 1152871 ssh_runner.go:195] Run: openssl version
	I1002 21:59:59.110630 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 21:59:59.130016 1152871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 21:59:59.135570 1152871 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 21:59:59.135711 1152871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 21:59:59.145063 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
	I1002 21:59:59.157400 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 21:59:59.169767 1152871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 21:59:59.175077 1152871 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 21:59:59.175188 1152871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 21:59:59.184770 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:59:59.196205 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:59:59.207928 1152871 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:59:59.213183 1152871 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:59:59.213310 1152871 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:59:59.223917 1152871 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:59:59.234997 1152871 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 21:59:59.240356 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:59:59.249813 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:59:59.259026 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:59:59.268044 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:59:59.278122 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:59:59.287357 1152871 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 21:59:59.296099 1152871 kubeadm.go:404] StartCluster: {Name:pause-050274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-050274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:59:59.296218 1152871 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:59:59.296326 1152871 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:59:59.368047 1152871 cri.go:89] found id: "ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 21:59:59.368068 1152871 cri.go:89] found id: "4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 21:59:59.368074 1152871 cri.go:89] found id: "f30faa01ebf74c21310cf18b8ff513087faaf29b8a664c62bd72bbe2944a5d62"
	I1002 21:59:59.368079 1152871 cri.go:89] found id: "930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 21:59:59.368085 1152871 cri.go:89] found id: "f06d65d7749b68e3c7d3c4bb2f1225fdd253ed586b2d53a1e932e5049ad916a3"
	I1002 21:59:59.368090 1152871 cri.go:89] found id: "76c2f3177df247fa443c6e85bdf81055807fae4834be338eaba12cf59d7fd9dd"
	I1002 21:59:59.368095 1152871 cri.go:89] found id: "377b38eb5eef2604269b173550459216ee9acb3b05afac01ec46a11202b79939"
	I1002 21:59:59.368099 1152871 cri.go:89] found id: "e3c52140fe32e8d0f31527ffae92f8413e4351fd2090e6152d941bcad38fea01"
	I1002 21:59:59.368104 1152871 cri.go:89] found id: "50aefa0a98638ef24b016660c6c1a0b2387c1b9492a2748dac6cae667528b5ae"
	I1002 21:59:59.368110 1152871 cri.go:89] found id: "ea378338e6aafa6c27c325d74caad7a065a2345c00ac58fb1f0eaadf5e0ec275"
	I1002 21:59:59.368114 1152871 cri.go:89] found id: "a31d21e215e21c0beea81fdf4349151070e1ad51d9a98013100d0fcb8d13e64c"
	I1002 21:59:59.368118 1152871 cri.go:89] found id: "6a2b821bf035aa0e992e0e3a8889d37d3d09866e2dbf8826a1daa9691c2dbf16"
	I1002 21:59:59.368123 1152871 cri.go:89] found id: "532c3641e5b21fa58ff002fa2d43592f8cf1849f89d33ea5a9de2eb295c0c94f"
	I1002 21:59:59.368164 1152871 cri.go:89] found id: "1be038f47f956d212c303cf036d4382bec1325b531021dee4ff9798815f09611"
	I1002 21:59:59.368169 1152871 cri.go:89] found id: "c51795cac7e8da5bc67fa7b9a53f1ba12c1ca3a1a7b777828872c7a5c8adf93c"
	I1002 21:59:59.368174 1152871 cri.go:89] found id: "0bd578bbc49f1fecff63a31fedea1503a6bb9d2841a54dc0ef1cee9ca8f8a2aa"
	I1002 21:59:59.368187 1152871 cri.go:89] found id: "da84362b7dcf10d12f6e877ba5f70d6a6df7cbb6c071630022ab8d0fae99e178"
	I1002 21:59:59.368194 1152871 cri.go:89] found id: "e22b6735220278369d0f77ee0a024423872e8235e79a44d1224703353e46957c"
	I1002 21:59:59.368203 1152871 cri.go:89] found id: ""
	I1002 21:59:59.368270 1152871 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
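The stderr log above is the second start reconfiguring the node away from Docker: docker.socket and docker.service are stopped, disabled and masked, /etc/crictl.yaml is pointed at unix:///var/run/crio/crio.sock, the pause image and cgroup driver are rewritten in /etc/crio/crio.conf.d/02-crio.conf, and crio is restarted (the restart alone took ~22s before the 60s socket wait). A minimal sketch for replaying those checks by hand against this profile is below; it only uses the minikube/systemd/crictl commands already visible in the log, and the profile name pause-050274 comes from this run, so treat it as a debugging aid rather than part of the test.

	# Re-check the CRI-O handover on the pause-050274 node (debugging sketch, not test code)
	minikube -p pause-050274 ssh -- sudo systemctl is-active docker        # expect "inactive": the unit was stopped and masked above
	minikube -p pause-050274 ssh -- cat /etc/crictl.yaml                   # runtime-endpoint: unix:///var/run/crio/crio.sock
	minikube -p pause-050274 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	minikube -p pause-050274 ssh -- sudo crictl version                    # RuntimeName: cri-o, RuntimeVersion: 1.24.6
	minikube -p pause-050274 ssh -- sudo journalctl -u crio -n 50 --no-pager   # look here if the ~22s restart grows or fails
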
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-050274
helpers_test.go:235: (dbg) docker inspect pause-050274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f",
	        "Created": "2023-10-02T21:58:04.69162597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1148181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T21:58:05.302377128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/hosts",
	        "LogPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f-json.log",
	        "Name": "/pause-050274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-050274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-050274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00-init/diff:/var/lib/docker/overlay2/211b77e87812a1edc3010e11f8a4e888a425a4aebe773b65e967cb7beecedbef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-050274",
	                "Source": "/var/lib/docker/volumes/pause-050274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-050274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-050274",
	                "name.minikube.sigs.k8s.io": "pause-050274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "868a09bff40049a86186cd21e35329b3ebb6f9735b13af31d4678253a1fb079e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/868a09bff400",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-050274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cbe09fdff1d2",
	                        "pause-050274"
	                    ],
	                    "NetworkID": "8c24a6fb62556caa93968b9db047d26d1e3c64ab7847dac2444544692be83d8b",
	                    "EndpointID": "a41c97e3d2840809f494ff8520c86fbbae91bcad02737a18c7e268362086c34b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
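In the inspect output above each exposed port is bound to 127.0.0.1 with an empty HostPort under PortBindings, so Docker assigns ephemeral host ports that only appear under NetworkSettings.Ports (22/tcp -> 33880, 8443/tcp -> 33877 for this run). A quick way to recover those mappings for the same container, sketched with plain docker CLI calls rather than the test helpers:

	# Map container ports back to the ephemeral host ports Docker picked (33880 / 33877 in the output above)
	docker port pause-050274 22/tcp
	docker port pause-050274 8443/tcp
	# or read a single mapping straight out of the inspect data with a Go template
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' pause-050274
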
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-050274 -n pause-050274
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-050274 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-050274 logs -n 25: (2.196772369s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:54 UTC | 02 Oct 23 21:54 UTC |
	| start   | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:54 UTC | 02 Oct 23 21:55 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| image   | test-preload-673079 image list | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:55 UTC |
	| delete  | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:55 UTC |
	| start   | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:56 UTC |
	|         | --memory=2048 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC | 02 Oct 23 21:56 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC | 02 Oct 23 21:57 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:57 UTC |
	| start   | -p insufficient-storage-768004 | insufficient-storage-768004 | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-768004 | insufficient-storage-768004 | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:57 UTC |
	| start   | -p pause-050274 --memory=2048  | pause-050274                | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:59 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-050274                | pause-050274                | jenkins | v1.31.2 | 02 Oct 23 21:59 UTC | 02 Oct 23 22:04 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-123767      | missing-upgrade-123767      | jenkins | v1.31.2 | 02 Oct 23 21:59 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-123767      | missing-upgrade-123767      | jenkins | v1.31.2 | 02 Oct 23 22:00 UTC | 02 Oct 23 22:00 UTC |
	| start   | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:00 UTC | 02 Oct 23 22:01 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:01 UTC | 02 Oct 23 22:01 UTC |
	| start   | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:01 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 22:01:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 22:01:17.301072 1160029 out.go:296] Setting OutFile to fd 1 ...
	I1002 22:01:17.301289 1160029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:17.301316 1160029 out.go:309] Setting ErrFile to fd 2...
	I1002 22:01:17.301335 1160029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:17.301602 1160029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 22:01:17.301990 1160029 out.go:303] Setting JSON to false
	I1002 22:01:17.303107 1160029 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17025,"bootTime":1696267053,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 22:01:17.303258 1160029 start.go:138] virtualization:  
	I1002 22:01:17.307215 1160029 out.go:177] * [kubernetes-upgrade-573624] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 22:01:17.309400 1160029 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 22:01:17.311149 1160029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:01:17.309487 1160029 notify.go:220] Checking for updates...
	I1002 22:01:17.315627 1160029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:01:17.317584 1160029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 22:01:17.319583 1160029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:01:17.321852 1160029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:01:17.324222 1160029 config.go:182] Loaded profile config "kubernetes-upgrade-573624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 22:01:17.324749 1160029 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 22:01:17.352396 1160029 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 22:01:17.352496 1160029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:17.442916 1160029 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:01:17.433082821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:01:17.443022 1160029 docker.go:294] overlay module found
	I1002 22:01:17.445999 1160029 out.go:177] * Using the docker driver based on existing profile
	I1002 22:01:17.447935 1160029 start.go:298] selected driver: docker
	I1002 22:01:17.447950 1160029 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-573624 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:17.448046 1160029 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:01:17.448682 1160029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:17.528211 1160029 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:01:17.518778279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:01:17.528548 1160029 cni.go:84] Creating CNI manager for ""
	I1002 22:01:17.528566 1160029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:17.528578 1160029 start_flags.go:321] config:
	{Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:17.530857 1160029 out.go:177] * Starting control plane node kubernetes-upgrade-573624 in cluster kubernetes-upgrade-573624
	I1002 22:01:17.532933 1160029 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 22:01:17.534787 1160029 out.go:177] * Pulling base image ...
	I1002 22:01:17.536754 1160029 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 22:01:17.536807 1160029 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 22:01:17.536821 1160029 cache.go:57] Caching tarball of preloaded images
	I1002 22:01:17.536914 1160029 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:01:17.536927 1160029 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 22:01:17.537032 1160029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/config.json ...
	I1002 22:01:17.537265 1160029 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 22:01:17.562766 1160029 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 22:01:17.562795 1160029 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 22:01:17.562815 1160029 cache.go:195] Successfully downloaded all kic artifacts
	I1002 22:01:17.562889 1160029 start.go:365] acquiring machines lock for kubernetes-upgrade-573624: {Name:mk1c322b4ea74092c8156e6c24f3801e5e50ca23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:01:17.562954 1160029 start.go:369] acquired machines lock for "kubernetes-upgrade-573624" in 41.96µs
	I1002 22:01:17.562973 1160029 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:01:17.562979 1160029 fix.go:54] fixHost starting: 
	I1002 22:01:17.563281 1160029 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-573624 --format={{.State.Status}}
	I1002 22:01:17.583303 1160029 fix.go:102] recreateIfNeeded on kubernetes-upgrade-573624: state=Stopped err=<nil>
	W1002 22:01:17.583345 1160029 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 22:01:17.585740 1160029 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-573624" ...
	I1002 22:01:15.042054 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:15.042109 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:15.042126 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:17.053232 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:17.053269 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:17.053284 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:19.064083 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:19.064114 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:19.064126 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
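(Annotation: the repeated api_server.go entries above are a readiness poll. minikube keeps issuing GET requests against the apiserver's /healthz endpoint and, while it still returns 500, logs the per-component checklist; here the failing check is "[-]etcd failed". A minimal, illustrative Go sketch of such a poll follows; it is not minikube's actual implementation, and the address and timings are taken from the log for illustration only.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Endpoint and intervals are illustrative.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed cert during bring-up, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// On 500 the body lists each check, e.g. "[-]etcd failed: reason withheld".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.67.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}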
	I1002 22:01:17.587882 1160029 cli_runner.go:164] Run: docker start kubernetes-upgrade-573624
	I1002 22:01:17.903942 1160029 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-573624 --format={{.State.Status}}
	I1002 22:01:17.923529 1160029 kic.go:426] container "kubernetes-upgrade-573624" state is running.
	I1002 22:01:17.923912 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:17.948857 1160029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/config.json ...
	I1002 22:01:17.949084 1160029 machine.go:88] provisioning docker machine ...
	I1002 22:01:17.949104 1160029 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-573624"
	I1002 22:01:17.949153 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:17.971295 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:17.972097 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:17.972118 1160029 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-573624 && echo "kubernetes-upgrade-573624" | sudo tee /etc/hostname
	I1002 22:01:17.972831 1160029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:01:21.141551 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-573624
	
	I1002 22:01:21.141640 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:21.168480 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:21.168893 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:21.168918 1160029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-573624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-573624/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-573624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:01:21.310974 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
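(Annotation: the provisioning steps above run small shell snippets on the node over SSH; 127.0.0.1:33899 is the container's forwarded port 22 and the user is "docker". A rough sketch of that pattern using golang.org/x/crypto/ssh follows; the key path is a placeholder and this is not minikube's ssh_runner code.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one shell command on the node and returns its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test nodes only; never do this against real hosts
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Same hostname command as in the log, run against the forwarded docker port.
	out, err := runRemote("127.0.0.1:33899", "docker",
		"/path/to/machines/kubernetes-upgrade-573624/id_rsa",
		`sudo hostname kubernetes-upgrade-573624 && echo "kubernetes-upgrade-573624" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}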
	I1002 22:01:21.311009 1160029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:01:21.311058 1160029 ubuntu.go:177] setting up certificates
	I1002 22:01:21.311068 1160029 provision.go:83] configureAuth start
	I1002 22:01:21.311135 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:21.328855 1160029 provision.go:138] copyHostCerts
	I1002 22:01:21.328949 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:01:21.328977 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:01:21.329058 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:01:21.329167 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:01:21.329178 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:01:21.329385 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:01:21.329490 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:01:21.329502 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:01:21.329537 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:01:21.329592 1160029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-573624 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-573624]
	I1002 22:01:21.822768 1160029 provision.go:172] copyRemoteCerts
	I1002 22:01:21.822837 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:01:21.822880 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:21.844800 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:21.943929 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:01:21.972605 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:01:22.003447 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:01:22.034448 1160029 provision.go:86] duration metric: configureAuth took 723.364769ms
	I1002 22:01:22.034473 1160029 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:01:22.034683 1160029 config.go:182] Loaded profile config "kubernetes-upgrade-573624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 22:01:22.034789 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.053528 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:22.053977 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:22.054002 1160029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:01:22.397837 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:01:22.397901 1160029 machine.go:91] provisioned docker machine in 4.448806793s
	I1002 22:01:22.397939 1160029 start.go:300] post-start starting for "kubernetes-upgrade-573624" (driver="docker")
	I1002 22:01:22.397980 1160029 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:01:22.398112 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:01:22.398193 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.418981 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.520542 1160029 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:01:22.524898 1160029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:01:22.524948 1160029 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:01:22.524960 1160029 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:01:22.524972 1160029 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 22:01:22.524987 1160029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:01:22.525049 1160029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:01:22.525130 1160029 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:01:22.525290 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:01:22.536275 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:01:22.564980 1160029 start.go:303] post-start completed in 166.99618ms
	I1002 22:01:22.565065 1160029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:01:22.565108 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.583452 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.679587 1160029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:01:22.685419 1160029 fix.go:56] fixHost completed within 5.122432372s
	I1002 22:01:22.685441 1160029 start.go:83] releasing machines lock for "kubernetes-upgrade-573624", held for 5.12247919s
	I1002 22:01:22.685512 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:22.702921 1160029 ssh_runner.go:195] Run: cat /version.json
	I1002 22:01:22.702980 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.703214 1160029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:01:22.703266 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.734432 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.739577 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.829880 1160029 ssh_runner.go:195] Run: systemctl --version
	I1002 22:01:22.965510 1160029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:01:23.114996 1160029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 22:01:23.120748 1160029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:01:23.131750 1160029 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 22:01:23.131853 1160029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:01:23.142398 1160029 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:01:23.142467 1160029 start.go:469] detecting cgroup driver to use...
	I1002 22:01:23.142521 1160029 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 22:01:23.142596 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:01:23.156397 1160029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:01:23.170196 1160029 docker.go:197] disabling cri-docker service (if available) ...
	I1002 22:01:23.170262 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:01:23.184674 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:01:23.198262 1160029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:01:23.291277 1160029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:01:23.387373 1160029 docker.go:213] disabling docker service ...
	I1002 22:01:23.387489 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:01:23.403114 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:01:23.416826 1160029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:01:23.508133 1160029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:01:23.618156 1160029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:01:23.633648 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:01:23.657857 1160029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 22:01:23.657925 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.671403 1160029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:01:23.671489 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.684594 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.696616 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.709668 1160029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:01:23.720848 1160029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:01:23.731118 1160029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:01:23.741406 1160029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:01:23.832504 1160029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:01:23.951669 1160029 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:01:23.951788 1160029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:01:23.957007 1160029 start.go:537] Will wait 60s for crictl version
	I1002 22:01:23.957068 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:01:23.961747 1160029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 22:01:24.007762 1160029 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 22:01:24.007858 1160029 ssh_runner.go:195] Run: crio --version
	I1002 22:01:24.054760 1160029 ssh_runner.go:195] Run: crio --version
	I1002 22:01:24.105154 1160029 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1002 22:01:21.073618 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:21.073652 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:21.073671 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:23.083475 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:23.083507 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:23.083519 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:24.107584 1160029 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-573624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:01:24.127189 1160029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:01:24.132561 1160029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
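(Annotation: the bash one-liner above rewrites /etc/hosts idempotently: it filters out any existing host.minikube.internal line and appends the current gateway IP. A small stand-alone Go sketch of the same idea; the IP and hostname come from the log, but the helper itself is illustrative and would need root to touch the real /etc/hosts.)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo / cp sequence seen in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, will be replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}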
	I1002 22:01:24.146818 1160029 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 22:01:24.146887 1160029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:01:24.193899 1160029 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 22:01:24.193983 1160029 ssh_runner.go:195] Run: which lz4
	I1002 22:01:24.198589 1160029 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 22:01:24.202945 1160029 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 22:01:24.202978 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (389006849 bytes)
	I1002 22:01:26.298276 1160029 crio.go:444] Took 2.099727 seconds to copy over tarball
	I1002 22:01:26.298347 1160029 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
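(Annotation: the preload path above has three steps: confirm lz4 is available, copy the ~389 MB preloaded image tarball to /preloaded.tar.lz4, then unpack it into /var so CRI-O finds the images locally. The unpack step amounts to the following; this is an illustrative wrapper, not minikube code.)

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed tarball into dest, i.e. the
// "sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4" invocation the log shows.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}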
	I1002 22:01:25.093166 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:25.093198 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:25.093231 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:27.103646 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:27.103675 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:27.103705 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:27.103768 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:27.178139 1152871 cri.go:89] found id: "a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:01:27.178158 1152871 cri.go:89] found id: "930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:01:27.178164 1152871 cri.go:89] found id: ""
	I1002 22:01:27.178172 1152871 logs.go:284] 2 containers: [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca]
	I1002 22:01:27.178226 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.184859 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.190133 1152871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:27.190238 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:27.252654 1152871 cri.go:89] found id: "4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:01:27.252674 1152871 cri.go:89] found id: ""
	I1002 22:01:27.252682 1152871 logs.go:284] 1 containers: [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959]
	I1002 22:01:27.252737 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.258975 1152871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:27.259046 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:27.328041 1152871 cri.go:89] found id: "1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:01:27.328060 1152871 cri.go:89] found id: "1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:01:27.328066 1152871 cri.go:89] found id: ""
	I1002 22:01:27.328073 1152871 logs.go:284] 2 containers: [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794]
	I1002 22:01:27.328132 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.334697 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.341178 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:27.341417 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:27.414498 1152871 cri.go:89] found id: "ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:01:27.414573 1152871 cri.go:89] found id: ""
	I1002 22:01:27.414596 1152871 logs.go:284] 1 containers: [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601]
	I1002 22:01:27.414689 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.420756 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:27.420872 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:27.482917 1152871 cri.go:89] found id: "47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:01:27.482983 1152871 cri.go:89] found id: ""
	I1002 22:01:27.483005 1152871 logs.go:284] 1 containers: [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66]
	I1002 22:01:27.483094 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.491225 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:27.491339 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:27.586165 1152871 cri.go:89] found id: "b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:01:27.586249 1152871 cri.go:89] found id: ""
	I1002 22:01:27.586272 1152871 logs.go:284] 1 containers: [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968]
	I1002 22:01:27.586359 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.591388 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:27.591506 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:27.653897 1152871 cri.go:89] found id: "75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:01:27.653973 1152871 cri.go:89] found id: ""
	I1002 22:01:27.654004 1152871 logs.go:284] 1 containers: [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f]
	I1002 22:01:27.654086 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.659314 1152871 logs.go:123] Gathering logs for etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] ...
	I1002 22:01:27.659384 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:01:27.755477 1152871 logs.go:123] Gathering logs for kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] ...
	I1002 22:01:27.755550 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:01:27.826165 1152871 logs.go:123] Gathering logs for container status ...
	I1002 22:01:27.826190 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:01:27.913566 1152871 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:27.913643 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:28.055168 1152871 logs.go:123] Gathering logs for coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] ...
	I1002 22:01:28.055246 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:01:28.145661 1152871 logs.go:123] Gathering logs for kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] ...
	I1002 22:01:28.145738 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:01:28.243382 1152871 logs.go:123] Gathering logs for kube-controller-manager [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968] ...
	I1002 22:01:28.243455 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:01:28.300322 1152871 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:28.300349 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:01:28.396743 1152871 logs.go:123] Gathering logs for coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60] ...
	I1002 22:01:28.396780 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:01:28.454715 1152871 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:28.454746 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
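(Annotation: the logs.go entries above follow a simple pattern: list container IDs per component with "crictl ps -a --quiet --name=<component>", then dump each one with "crictl logs --tail 400 <id>". A compact sketch of that loop; the commands are the ones shown in the log, the surrounding Go is illustrative rather than the actual logs.go code.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns IDs of all containers (running or exited) whose name matches component.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(component, err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each container, as the test harness does.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
		}
	}
}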
	I1002 22:01:28.864597 1160029 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.566218076s)
	I1002 22:01:28.864623 1160029 crio.go:451] Took 2.566322 seconds to extract the tarball
	I1002 22:01:28.864633 1160029 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 22:01:28.913893 1160029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:01:28.967299 1160029 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 22:01:28.967322 1160029 cache_images.go:84] Images are preloaded, skipping loading
	I1002 22:01:28.967409 1160029 ssh_runner.go:195] Run: crio config
	I1002 22:01:29.031840 1160029 cni.go:84] Creating CNI manager for ""
	I1002 22:01:29.031867 1160029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:29.031890 1160029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 22:01:29.031912 1160029 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-573624 NodeName:kubernetes-upgrade-573624 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:01:29.032053 1160029 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-573624"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
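(Annotation: the YAML above is rendered from the kubeadm options struct logged just before it and later written to /var/tmp/minikube/kubeadm.yaml.new. A reduced sketch of how such a config could be templated in Go; the struct and template names here are illustrative, not minikube's actual kubeadm package.)

package main

import (
	"os"
	"text/template"
)

// A few of the fields from the logged options struct, for illustration only.
type kubeadmParams struct {
	APIServerPort     int
	KubernetesVersion string
	ClusterName       string
	PodSubnet         string
	ServiceCIDR       string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		APIServerPort:     8443,
		KubernetesVersion: "v1.28.2",
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	// Render to stdout; minikube copies the rendered file onto the node over SSH.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	_ = tmpl.Execute(os.Stdout, p)
}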
	
	I1002 22:01:29.032128 1160029 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-573624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 22:01:29.032195 1160029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 22:01:29.043315 1160029 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:01:29.043391 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:01:29.054082 1160029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1002 22:01:29.075530 1160029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:01:29.097542 1160029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1002 22:01:29.119835 1160029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:01:29.124538 1160029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
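
The bash one-liner above keeps /etc/hosts idempotent: drop any stale control-plane.minikube.internal line, then append the current IP. A small Go sketch of the same idea (illustrative only; it skips the temp-file-plus-sudo-cp dance the shell command uses and assumes it already runs with enough privileges):

package main

import (
	"os"
	"strings"
)

// pinHostsEntry removes any existing "<tab>host" mapping and appends "ip<tab>host",
// mirroring the grep -v / echo pipeline shown in the log.
func pinHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping, same as `grep -v`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
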
	I1002 22:01:29.138300 1160029 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624 for IP: 192.168.76.2
	I1002 22:01:29.138332 1160029 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:01:29.138469 1160029 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 22:01:29.138517 1160029 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 22:01:29.138594 1160029 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.key
	I1002 22:01:29.138667 1160029 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.key.31bdca25
	I1002 22:01:29.138712 1160029 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.key
	I1002 22:01:29.138826 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 22:01:29.138867 1160029 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 22:01:29.138880 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:01:29.138906 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 22:01:29.138936 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:01:29.138965 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 22:01:29.139015 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:01:29.139731 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 22:01:29.170717 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:01:29.199043 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:01:29.228643 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:01:29.258148 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:01:29.287230 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 22:01:29.315317 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:01:29.344146 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:01:29.373683 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 22:01:29.403887 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:01:29.433036 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 22:01:29.461573 1160029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:01:29.482960 1160029 ssh_runner.go:195] Run: openssl version
	I1002 22:01:29.490014 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 22:01:29.502638 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.507351 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.507419 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.516349 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:01:29.527585 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:01:29.539615 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.544423 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.544498 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.553304 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:01:29.564623 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 22:01:29.576409 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.581080 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.581140 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.589759 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
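
The openssl -hash / ln -fs pairs above install each CA under /etc/ssl/certs by its subject hash, which is how OpenSSL-based clients locate trust anchors. A short sketch of that step, assuming openssl is on PATH and write access to the certs directory (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and then
// exposes the PEM as <certsDir>/<hash>.0, the same effect as `ln -fs`.
func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // force-replace, as `ln -fs` does
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
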
	I1002 22:01:29.600441 1160029 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 22:01:29.604807 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:01:29.613117 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:01:29.621990 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:01:29.630417 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:01:29.639154 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:01:29.647779 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
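
The -checkend 86400 runs above ask, for every control-plane certificate, whether it expires within the next 24 hours. The equivalent check in Go, stdlib only (the path is one of the files from the log; the function is a sketch, not minikube code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// reaches NotAfter within the given window, like `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need regenerating")
	}
}
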
	I1002 22:01:29.656437 1160029 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:29.656531 1160029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:01:29.656592 1160029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:01:29.699807 1160029 cri.go:89] found id: ""
	I1002 22:01:29.699951 1160029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:01:29.711566 1160029 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 22:01:29.711637 1160029 kubeadm.go:636] restartCluster start
	I1002 22:01:29.711748 1160029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:01:29.722844 1160029 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:01:29.723646 1160029 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-573624" does not appear in /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:01:29.723991 1160029 kubeconfig.go:146] "kubernetes-upgrade-573624" context is missing from /home/jenkins/minikube-integration/17323-1042317/kubeconfig - will repair!
	I1002 22:01:29.724599 1160029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:01:29.725731 1160029 kapi.go:59] client config for kubernetes-upgrade-573624: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:01:29.726791 1160029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:01:29.737421 1160029 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-10-02 22:00:38.206983393 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-10-02 22:01:29.114743692 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-573624"
	   kubeletExtraArgs:
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-573624
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.28.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
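
The unified diff above is how the reconfigure decision surfaces: the kubeadm.yaml currently on the node is compared against the freshly rendered kubeadm.yaml.new, and any difference (here the v1beta1 to v1beta3 API bump, the CRI socket scheme, the cluster name, and the Kubernetes version) triggers "needs reconfigure". A rough sketch of that comparison (illustrative only, not minikube's actual code path):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	oldPath := "/var/tmp/minikube/kubeadm.yaml"
	newPath := "/var/tmp/minikube/kubeadm.yaml.new"

	oldCfg, err := os.ReadFile(oldPath)
	if err != nil {
		panic(err)
	}
	newCfg, err := os.ReadFile(newPath)
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldCfg, newCfg) {
		fmt.Println("configs match; no reconfigure needed")
		return
	}
	// diff exits 1 when the files differ, so the error is ignored here
	// and only the rendered diff is shown.
	out, _ := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	fmt.Printf("needs reconfigure: configs differ:\n%s", out)
}
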
	I1002 22:01:29.737453 1160029 kubeadm.go:1128] stopping kube-system containers ...
	I1002 22:01:29.737465 1160029 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 22:01:29.737522 1160029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:01:29.780270 1160029 cri.go:89] found id: ""
	I1002 22:01:29.780340 1160029 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 22:01:29.794729 1160029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:01:29.806216 1160029 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Oct  2 22:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Oct  2 22:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Oct  2 22:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Oct  2 22:00 /etc/kubernetes/scheduler.conf
	
	I1002 22:01:29.806331 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:01:29.817581 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:01:29.828496 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:01:29.839465 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:01:29.850242 1160029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:01:29.861166 1160029 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 22:01:29.861193 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:29.921282 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.272796 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.351475313s)
	I1002 22:01:31.272831 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.439481 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.529763 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.621746 1160029 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:01:31.621854 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:31.639944 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.161567 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.661553 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.699126 1160029 api_server.go:72] duration metric: took 1.077400004s to wait for apiserver process to appear ...
	I1002 22:01:32.699152 1160029 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:01:32.699170 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:37.699999 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:37.700058 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:42.701286 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:43.202065 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:48.202878 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:48.202927 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.203165 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:53.203209 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.744340 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33610->192.168.76.2:8443: read: connection reset by peer
	I1002 22:01:53.744380 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.744647 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:54.202297 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:54.202700 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:54.702337 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:54.702783 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:55.202333 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:00.202766 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:00.202823 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:05.203234 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:05.203288 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:10.203519 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:10.203559 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:15.204296 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:15.204340 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.097474 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:45856->192.168.76.2:8443: read: connection reset by peer
	I1002 22:02:16.097512 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.097798 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:16.202094 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.202604 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:16.702366 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.702884 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:17.201391 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:17.201790 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:17.701448 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:17.701848 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:18.202375 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:18.202795 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:18.702343 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:18.702764 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:19.201406 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:19.201818 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:19.701433 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:19.701825 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:20.201662 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:20.202091 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:20.701457 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:20.701869 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:21.201411 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:21.201844 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:21.701404 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:21.701886 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:22.202401 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:22.202791 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:22.702386 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:22.702812 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:23.201449 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:23.201880 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:23.702261 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:23.702659 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:24.202254 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:24.202702 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:24.702361 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:24.702885 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:25.202410 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:25.202880 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:25.702414 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:25.702862 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:26.202362 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:26.202857 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:26.701436 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:26.701778 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:27.202320 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:27.202667 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
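
The long run of "Checking apiserver healthz" lines above is a plain poll loop: issue a GET against /healthz with a short per-request timeout, retry every half second or so, and keep going until the endpoint answers or the overall wait expires. A self-contained sketch of that pattern (InsecureSkipVerify is only there to keep the example standalone; the real client authenticates with the client certificates set up earlier):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns 200 or the overall
// deadline passes, mirroring the retry cadence visible in the log.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request deadline, like the 5s timeouts in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", overall)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
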
	I1002 22:02:28.609083 1152871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.154303837s)
	W1002 22:02:28.609120 1152871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1002 22:02:28.609129 1152871 logs.go:123] Gathering logs for kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] ...
	I1002 22:02:28.609139 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:28.668679 1152871 logs.go:123] Gathering logs for kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] ...
	I1002 22:02:28.668716 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:02:28.711450 1152871 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:28.711476 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:28.733260 1152871 logs.go:123] Gathering logs for kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] ...
	I1002 22:02:28.733291 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:27.701429 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:27.701895 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:28.201433 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:28.201843 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:28.702378 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:28.702780 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:29.201401 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:29.201867 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:29.701417 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:29.701834 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:30.201623 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:30.202050 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:30.701473 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:30.701927 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:31.201563 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:31.201980 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:31.701445 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:31.701811 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:32.202362 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:32.202713 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:31.289328 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:02:31.298641 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:02:31.298672 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:02:31.298699 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:31.298765 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:31.344117 1152871 cri.go:89] found id: "a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:31.344143 1152871 cri.go:89] found id: "930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:31.344149 1152871 cri.go:89] found id: ""
	I1002 22:02:31.344157 1152871 logs.go:284] 2 containers: [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca]
	I1002 22:02:31.344221 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.349176 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.354197 1152871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:31.354347 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:31.397684 1152871 cri.go:89] found id: "07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb"
	I1002 22:02:31.397752 1152871 cri.go:89] found id: "4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:02:31.397772 1152871 cri.go:89] found id: ""
	I1002 22:02:31.397796 1152871 logs.go:284] 2 containers: [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959]
	I1002 22:02:31.397875 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.402573 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.407234 1152871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:31.407332 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:31.455394 1152871 cri.go:89] found id: "1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:02:31.455413 1152871 cri.go:89] found id: "1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:02:31.455420 1152871 cri.go:89] found id: ""
	I1002 22:02:31.455427 1152871 logs.go:284] 2 containers: [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794]
	I1002 22:02:31.455486 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.460261 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.464761 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:31.464852 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:31.510583 1152871 cri.go:89] found id: "ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec"
	I1002 22:02:31.510607 1152871 cri.go:89] found id: "ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:02:31.510613 1152871 cri.go:89] found id: ""
	I1002 22:02:31.510620 1152871 logs.go:284] 2 containers: [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601]
	I1002 22:02:31.510680 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.515463 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.520123 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:31.520193 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:31.569943 1152871 cri.go:89] found id: "47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:02:31.569966 1152871 cri.go:89] found id: ""
	I1002 22:02:31.569975 1152871 logs.go:284] 1 containers: [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66]
	I1002 22:02:31.570035 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.574966 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:31.575039 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:31.625076 1152871 cri.go:89] found id: "8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec"
	I1002 22:02:31.625101 1152871 cri.go:89] found id: "b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:02:31.625107 1152871 cri.go:89] found id: ""
	I1002 22:02:31.625115 1152871 logs.go:284] 2 containers: [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968]
	I1002 22:02:31.625176 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.629870 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.634481 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:31.634552 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:31.697232 1152871 cri.go:89] found id: "75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:02:31.697253 1152871 cri.go:89] found id: ""
	I1002 22:02:31.697262 1152871 logs.go:284] 1 containers: [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f]
	I1002 22:02:31.697318 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.702242 1152871 logs.go:123] Gathering logs for kube-scheduler [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec] ...
	I1002 22:02:31.702291 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec"
	I1002 22:02:31.747183 1152871 logs.go:123] Gathering logs for kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] ...
	I1002 22:02:31.747208 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:02:31.822455 1152871 logs.go:123] Gathering logs for kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] ...
	I1002 22:02:31.822527 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:02:31.865981 1152871 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:31.866008 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:31.949791 1152871 logs.go:123] Gathering logs for kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] ...
	I1002 22:02:31.949827 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:32.030765 1152871 logs.go:123] Gathering logs for kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] ...
	I1002 22:02:32.030800 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:32.079615 1152871 logs.go:123] Gathering logs for etcd [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb] ...
	I1002 22:02:32.079645 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb"
	I1002 22:02:32.137084 1152871 logs.go:123] Gathering logs for kube-controller-manager [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968] ...
	I1002 22:02:32.137114 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:02:32.180900 1152871 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:32.180928 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:32.202963 1152871 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:32.202992 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
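
The log-gathering pattern above repeats for every control-plane component: list matching container IDs with "crictl ps -a --quiet --name=<component>", then pull the last 400 lines of each with "crictl logs --tail 400 <id>". A sketch of that loop (the component names, flags, and sudo usage are taken from the log; the helper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists all containers (running or exited) for one
// component and prints the tail of each container's log.
func gatherComponentLogs(name string) error {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(ids)) {
		fmt.Printf("=== %s [%s] ===\n", name, id)
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("crictl logs %s: %v", id, err)
		}
		fmt.Print(string(out))
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		if err := gatherComponentLogs(c); err != nil {
			fmt.Println("warning:", err)
		}
	}
}
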
	I1002 22:02:32.702273 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:32.702366 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:32.747144 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:32.747165 1160029 cri.go:89] found id: ""
	I1002 22:02:32.747173 1160029 logs.go:284] 1 containers: [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:32.747226 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.752076 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:32.752150 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:32.799411 1160029 cri.go:89] found id: ""
	I1002 22:02:32.799437 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.799447 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:32.799453 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:32.799512 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:32.841151 1160029 cri.go:89] found id: ""
	I1002 22:02:32.841179 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.841188 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:32.841194 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:32.841275 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:32.883371 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:32.883390 1160029 cri.go:89] found id: ""
	I1002 22:02:32.883399 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:32.883455 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.888018 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:32.888089 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:32.929755 1160029 cri.go:89] found id: ""
	I1002 22:02:32.929778 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.929786 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:32.929792 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:32.929854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:32.972499 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:32.972519 1160029 cri.go:89] found id: ""
	I1002 22:02:32.972527 1160029 logs.go:284] 1 containers: [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:32.972581 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.977180 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:32.977286 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:33.026796 1160029 cri.go:89] found id: ""
	I1002 22:02:33.026818 1160029 logs.go:284] 0 containers: []
	W1002 22:02:33.026828 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:33.026835 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:33.026902 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:33.070729 1160029 cri.go:89] found id: ""
	I1002 22:02:33.070753 1160029 logs.go:284] 0 containers: []
	W1002 22:02:33.070761 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:33.070772 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:33.070784 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:33.092496 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:33.092525 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:33.170625 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:33.170647 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:33.170659 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:33.223587 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:33.223620 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:33.311340 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:33.311374 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:33.359196 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:33.359224 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:33.394548 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:33.394578 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:33.439206 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:33.439235 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:36.014453 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:41.015003 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:41.015056 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:41.015117 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:41.058793 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:41.058813 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:41.058819 1160029 cri.go:89] found id: ""
	I1002 22:02:41.058826 1160029 logs.go:284] 2 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:41.058882 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.063438 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.068095 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:41.068166 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:41.108922 1160029 cri.go:89] found id: ""
	I1002 22:02:41.108946 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.108955 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:41.108962 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:41.109024 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:41.151402 1160029 cri.go:89] found id: ""
	I1002 22:02:41.151504 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.151518 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:41.151525 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:41.151613 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:41.195718 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:41.195740 1160029 cri.go:89] found id: ""
	I1002 22:02:41.195748 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:41.195805 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.200327 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:41.200396 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:41.242717 1160029 cri.go:89] found id: ""
	I1002 22:02:41.242739 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.242747 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:41.242755 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:41.242816 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:41.285706 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:41.285728 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:41.285733 1160029 cri.go:89] found id: ""
	I1002 22:02:41.285741 1160029 logs.go:284] 2 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:41.285800 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.290309 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.294926 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:41.295001 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:41.336676 1160029 cri.go:89] found id: ""
	I1002 22:02:41.336699 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.336707 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:41.336714 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:41.336771 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:41.387267 1160029 cri.go:89] found id: ""
	I1002 22:02:41.387331 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.387353 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:41.387385 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:41.387422 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:41.454773 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:41.454810 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:41.475954 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:41.475983 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:02:51.555076 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.079069792s)
	W1002 22:02:51.555116 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 22:02:51.555125 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:51.555135 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:51.601472 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:51.601501 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:51.668460 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:51.668491 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:51.710692 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:51.710727 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:51.788864 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:51.788902 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:51.835331 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:51.835357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:51.878019 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:51.878054 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:54.436276 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:55.510942 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:50108->192.168.76.2:8443: read: connection reset by peer
	I1002 22:02:55.511001 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:55.511066 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:55.575926 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:55.575950 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:55.575956 1160029 cri.go:89] found id: ""
	I1002 22:02:55.575965 1160029 logs.go:284] 2 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:55.576023 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.580614 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.584964 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:55.585040 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:55.625866 1160029 cri.go:89] found id: ""
	I1002 22:02:55.625888 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.625897 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:55.625903 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:55.625960 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:55.667979 1160029 cri.go:89] found id: ""
	I1002 22:02:55.668005 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.668014 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:55.668021 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:55.668087 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:55.713876 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:55.713896 1160029 cri.go:89] found id: ""
	I1002 22:02:55.713904 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:55.713959 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.718633 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:55.718706 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:55.760406 1160029 cri.go:89] found id: ""
	I1002 22:02:55.760432 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.760440 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:55.760447 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:55.760504 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:55.805402 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:55.805422 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:55.805428 1160029 cri.go:89] found id: ""
	I1002 22:02:55.805436 1160029 logs.go:284] 2 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:55.805493 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.809917 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.814240 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:55.814311 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:55.879039 1160029 cri.go:89] found id: ""
	I1002 22:02:55.879062 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.879070 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:55.879077 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:55.879133 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:55.925703 1160029 cri.go:89] found id: ""
	I1002 22:02:55.925725 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.925733 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:55.925746 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:55.925758 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:56.007018 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:56.007045 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:56.007059 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:56.065577 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:56.065609 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	W1002 22:02:56.108911 1160029 logs.go:130] failed kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:02:56.105489    1494 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist" containerID="e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	time="2023-10-02T22:02:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1002 22:02:56.105489    1494 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist" containerID="e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	time="2023-10-02T22:02:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist"
	
	** /stderr **
	I1002 22:02:56.108981 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:56.109022 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:56.188969 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:56.189003 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:56.242703 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:56.242735 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:56.284719 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:56.284755 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:56.359209 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:56.359245 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:56.381512 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:56.381540 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:56.432098 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:56.432126 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:58.977721 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:58.978127 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:58.978174 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:58.978231 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:59.021371 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:59.021394 1160029 cri.go:89] found id: ""
	I1002 22:02:59.021403 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:02:59.021465 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.026300 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:59.026378 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:59.070099 1160029 cri.go:89] found id: ""
	I1002 22:02:59.070122 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.070131 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:59.070138 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:59.070206 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:59.112757 1160029 cri.go:89] found id: ""
	I1002 22:02:59.112779 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.112788 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:59.112795 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:59.112854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:59.163329 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:59.163349 1160029 cri.go:89] found id: ""
	I1002 22:02:59.163358 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:59.163418 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.168316 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:59.168409 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:59.214822 1160029 cri.go:89] found id: ""
	I1002 22:02:59.214847 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.214856 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:59.214864 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:59.214927 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:59.257810 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:59.257846 1160029 cri.go:89] found id: ""
	I1002 22:02:59.257854 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:02:59.257911 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.262389 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:59.262467 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:59.310187 1160029 cri.go:89] found id: ""
	I1002 22:02:59.310211 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.310219 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:59.310233 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:59.310295 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:59.352789 1160029 cri.go:89] found id: ""
	I1002 22:02:59.352824 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.352833 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:59.352843 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:59.352855 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:59.427976 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:59.428013 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:59.450020 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:59.450048 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:59.529347 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:59.529371 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:59.529383 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:59.580628 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:59.580660 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:59.662193 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:59.662230 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:59.716307 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:59.716337 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:59.758322 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:59.758357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:02.318518 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:02.318904 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:02.318955 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:02.319015 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:02.376004 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:02.376029 1160029 cri.go:89] found id: ""
	I1002 22:03:02.376038 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:02.376093 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.381736 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:02.381830 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:02.449278 1160029 cri.go:89] found id: ""
	I1002 22:03:02.449312 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.449321 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:02.449328 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:02.449394 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:02.504469 1160029 cri.go:89] found id: ""
	I1002 22:03:02.504498 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.504507 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:02.504517 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:02.504574 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:02.552028 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:02.552049 1160029 cri.go:89] found id: ""
	I1002 22:03:02.552057 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:02.552115 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.556727 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:02.556802 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:02.600504 1160029 cri.go:89] found id: ""
	I1002 22:03:02.600525 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.600533 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:02.600539 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:02.600596 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:02.642187 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:02.642212 1160029 cri.go:89] found id: ""
	I1002 22:03:02.642221 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:02.642278 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.646793 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:02.646864 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:02.690884 1160029 cri.go:89] found id: ""
	I1002 22:03:02.690965 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.691000 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:02.691048 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:02.691149 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:02.740024 1160029 cri.go:89] found id: ""
	I1002 22:03:02.740051 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.740059 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:02.740068 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:02.740080 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:02.780747 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:02.780781 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:02.846401 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:02.846431 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:02.929960 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:02.929997 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:02.951905 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:02.951933 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:03.047470 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:03.047551 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:03.047625 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:03.098330 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:03.098364 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:03.184312 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:03.184350 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:05.727550 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:05.727963 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:05.728010 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:05.728063 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:05.769757 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:05.769781 1160029 cri.go:89] found id: ""
	I1002 22:03:05.769790 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:05.769885 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:05.774314 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:05.774388 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:05.820319 1160029 cri.go:89] found id: ""
	I1002 22:03:05.820344 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.820353 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:05.820359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:05.820417 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:05.863605 1160029 cri.go:89] found id: ""
	I1002 22:03:05.863627 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.863635 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:05.863641 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:05.863700 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:05.908334 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:05.908403 1160029 cri.go:89] found id: ""
	I1002 22:03:05.908426 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:05.908508 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:05.913046 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:05.913172 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:05.959923 1160029 cri.go:89] found id: ""
	I1002 22:03:05.959947 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.959954 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:05.959961 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:05.960021 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:06.007960 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:06.008036 1160029 cri.go:89] found id: ""
	I1002 22:03:06.008059 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:06.008156 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:06.013478 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:06.013608 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:06.063502 1160029 cri.go:89] found id: ""
	I1002 22:03:06.063579 1160029 logs.go:284] 0 containers: []
	W1002 22:03:06.063600 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:06.063609 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:06.063675 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:06.107239 1160029 cri.go:89] found id: ""
	I1002 22:03:06.107313 1160029 logs.go:284] 0 containers: []
	W1002 22:03:06.107329 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:06.107340 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:06.107354 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:06.215936 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:06.215976 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:06.264679 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:06.264707 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:06.306333 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:06.306368 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:06.371592 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:06.371618 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:06.458994 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:06.459031 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:06.479722 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:06.479750 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:06.555580 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:06.555671 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:06.555691 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.103579 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:09.104048 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:09.104105 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:09.104164 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:09.148452 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.148471 1160029 cri.go:89] found id: ""
	I1002 22:03:09.148480 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:09.148545 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.153009 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:09.153081 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:09.198118 1160029 cri.go:89] found id: ""
	I1002 22:03:09.198143 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.198151 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:09.198157 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:09.198218 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:09.243593 1160029 cri.go:89] found id: ""
	I1002 22:03:09.243617 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.243626 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:09.243633 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:09.243692 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:09.286247 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:09.286270 1160029 cri.go:89] found id: ""
	I1002 22:03:09.286279 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:09.286335 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.290767 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:09.290831 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:09.342522 1160029 cri.go:89] found id: ""
	I1002 22:03:09.342542 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.342550 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:09.342557 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:09.342628 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:09.389420 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:09.389446 1160029 cri.go:89] found id: ""
	I1002 22:03:09.389466 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:09.389526 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.394221 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:09.394296 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:09.435443 1160029 cri.go:89] found id: ""
	I1002 22:03:09.435471 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.435480 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:09.435487 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:09.435549 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:09.477321 1160029 cri.go:89] found id: ""
	I1002 22:03:09.477342 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.477350 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:09.477360 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:09.477372 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:09.556629 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:09.556693 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:09.556720 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.607409 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:09.607442 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:09.699143 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:09.699180 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:09.745122 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:09.745240 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:09.791175 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:09.791212 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:09.842844 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:09.842872 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:09.922433 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:09.922467 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:12.443917 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:12.444351 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:12.444394 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:12.444450 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:12.488011 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:12.488077 1160029 cri.go:89] found id: ""
	I1002 22:03:12.488093 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:12.488157 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.493404 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:12.493475 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:12.540848 1160029 cri.go:89] found id: ""
	I1002 22:03:12.540873 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.540882 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:12.540889 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:12.540950 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:12.585898 1160029 cri.go:89] found id: ""
	I1002 22:03:12.585922 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.585930 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:12.585937 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:12.585998 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:12.627491 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:12.627513 1160029 cri.go:89] found id: ""
	I1002 22:03:12.627521 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:12.627579 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.631945 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:12.632013 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:12.674981 1160029 cri.go:89] found id: ""
	I1002 22:03:12.675004 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.675013 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:12.675020 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:12.675085 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:12.718776 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:12.718839 1160029 cri.go:89] found id: ""
	I1002 22:03:12.718861 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:12.718943 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.723424 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:12.723517 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:12.767007 1160029 cri.go:89] found id: ""
	I1002 22:03:12.767032 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.767040 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:12.767047 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:12.767141 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:12.809851 1160029 cri.go:89] found id: ""
	I1002 22:03:12.809874 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.809882 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:12.809892 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:12.809905 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:12.897393 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:12.897433 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:12.946867 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:12.946893 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:12.988547 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:12.988582 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:13.060855 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:13.060882 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:13.143364 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:13.143397 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:13.166073 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:13.166115 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:13.265727 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:13.265915 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:13.265937 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:15.831779 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:15.832161 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:15.832203 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:15.832258 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:15.873377 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:15.873399 1160029 cri.go:89] found id: ""
	I1002 22:03:15.873406 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:15.873471 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:15.878081 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:15.878153 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:15.919344 1160029 cri.go:89] found id: ""
	I1002 22:03:15.919365 1160029 logs.go:284] 0 containers: []
	W1002 22:03:15.919375 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:15.919382 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:15.919440 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:15.961803 1160029 cri.go:89] found id: ""
	I1002 22:03:15.961831 1160029 logs.go:284] 0 containers: []
	W1002 22:03:15.961839 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:15.961846 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:15.961908 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:16.019297 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:16.019317 1160029 cri.go:89] found id: ""
	I1002 22:03:16.019325 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:16.019382 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:16.024464 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:16.024540 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:16.077595 1160029 cri.go:89] found id: ""
	I1002 22:03:16.077617 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.077626 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:16.077632 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:16.077692 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:16.126530 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:16.126549 1160029 cri.go:89] found id: ""
	I1002 22:03:16.126558 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:16.126615 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:16.131447 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:16.131551 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:16.181511 1160029 cri.go:89] found id: ""
	I1002 22:03:16.181535 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.181543 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:16.181550 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:16.181610 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:16.226997 1160029 cri.go:89] found id: ""
	I1002 22:03:16.227019 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.227026 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:16.227036 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:16.227049 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:16.312560 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:16.312635 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:16.335913 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:16.336084 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:16.435681 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:16.435741 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:16.435763 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:16.492214 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:16.492244 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:16.581708 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:16.581745 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:16.628395 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:16.628422 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:16.668235 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:16.668268 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:19.232534 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:19.232959 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:19.233055 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:19.233134 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:19.276904 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:19.276930 1160029 cri.go:89] found id: ""
	I1002 22:03:19.276938 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:19.277022 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.281881 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:19.281961 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:19.338966 1160029 cri.go:89] found id: ""
	I1002 22:03:19.338989 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.338998 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:19.339004 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:19.339089 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:19.384663 1160029 cri.go:89] found id: ""
	I1002 22:03:19.384685 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.384694 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:19.384701 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:19.384759 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:19.430728 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:19.430749 1160029 cri.go:89] found id: ""
	I1002 22:03:19.430757 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:19.430818 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.435608 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:19.435694 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:19.478395 1160029 cri.go:89] found id: ""
	I1002 22:03:19.478419 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.478427 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:19.478434 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:19.478492 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:19.525986 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:19.526006 1160029 cri.go:89] found id: ""
	I1002 22:03:19.526014 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:19.526073 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.530801 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:19.530878 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:19.580353 1160029 cri.go:89] found id: ""
	I1002 22:03:19.580378 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.580388 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:19.580394 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:19.580455 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:19.634138 1160029 cri.go:89] found id: ""
	I1002 22:03:19.634162 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.634172 1160029 logs.go:286] No container was found matching "storage-provisioner"
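Each "listing CRI containers" line above corresponds to running `sudo crictl ps -a --quiet --name=<component>` over SSH and collecting the returned container IDs. A rough local Go sketch of the same idea is below; the component names and crictl flags mirror the log, while the surrounding plumbing is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (running or exited)
    // whose name matches the given component, as reported by crictl.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := listContainerIDs(c)
            fmt.Printf("%s: %d containers %v (err: %v)\n", c, len(ids), ids, err)
        }
    }
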
	I1002 22:03:19.634181 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:19.634194 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:19.722176 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:19.722214 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:19.746252 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:19.746287 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:19.827587 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:19.827610 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:19.827624 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:19.881152 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:19.881182 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:19.966232 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:19.966272 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:20.022456 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:20.022487 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:20.064541 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:20.064577 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:22.617974 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:22.618399 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:22.618453 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:22.618510 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:22.664330 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:22.664353 1160029 cri.go:89] found id: ""
	I1002 22:03:22.664361 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:22.664425 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.669546 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:22.669619 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:22.718600 1160029 cri.go:89] found id: ""
	I1002 22:03:22.718621 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.718630 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:22.718636 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:22.718694 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:22.762212 1160029 cri.go:89] found id: ""
	I1002 22:03:22.762234 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.762242 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:22.762250 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:22.762319 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:22.809833 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:22.809856 1160029 cri.go:89] found id: ""
	I1002 22:03:22.809864 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:22.809921 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.814532 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:22.814651 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:22.862158 1160029 cri.go:89] found id: ""
	I1002 22:03:22.862234 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.862256 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:22.862280 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:22.862364 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:22.916691 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:22.916751 1160029 cri.go:89] found id: ""
	I1002 22:03:22.916773 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:22.916850 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.921824 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:22.921942 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:22.963132 1160029 cri.go:89] found id: ""
	I1002 22:03:22.963196 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.963218 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:22.963232 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:22.963306 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:23.012720 1160029 cri.go:89] found id: ""
	I1002 22:03:23.012797 1160029 logs.go:284] 0 containers: []
	W1002 22:03:23.012821 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:23.012861 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:23.012893 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:23.034054 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:23.034085 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:23.119233 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
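The failing "describe nodes" step shells out to the kubectl binary inside the VM and then reports the command, exit status, stdout, and stderr together, which is why the "connection to the server localhost:8443 was refused" message appears in the blocks above. A hedged sketch of capturing both streams that way is shown here; the binary and kubeconfig paths are copied from the log, the rest is illustrative.

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // describeNodes runs the in-VM kubectl and returns stdout, or an error that
    // carries both captured streams when the command exits non-zero.
    func describeNodes() (string, error) {
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.2/kubectl",
            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr
        if err := cmd.Run(); err != nil {
            return "", fmt.Errorf("%w\nstdout:\n%s\nstderr:\n%s", err, stdout.String(), stderr.String())
        }
        return stdout.String(), nil
    }

    func main() {
        out, err := describeNodes()
        if err != nil {
            fmt.Println("failed describe nodes:", err)
            return
        }
        fmt.Print(out)
    }
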
	I1002 22:03:23.119270 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:23.119282 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:23.165862 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:23.165891 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:23.252596 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:23.252634 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:23.318705 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:23.318775 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:23.361931 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:23.362018 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:23.425339 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:23.425370 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:26.019592 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:27.188989 1152871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (54.985964181s)
	I1002 22:03:27.194824 1152871 logs.go:123] Gathering logs for etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] ...
	I1002 22:03:27.194929 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:03:27.284201 1152871 logs.go:123] Gathering logs for coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60] ...
	I1002 22:03:27.284326 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:03:27.375529 1152871 logs.go:123] Gathering logs for container status ...
	I1002 22:03:27.375637 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:27.474199 1152871 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:27.474296 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:27.706712 1152871 logs.go:123] Gathering logs for coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] ...
	I1002 22:03:27.706817 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:03:27.809941 1152871 logs.go:123] Gathering logs for kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] ...
	I1002 22:03:27.809974 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:03:27.981792 1152871 logs.go:123] Gathering logs for kube-controller-manager [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec] ...
	I1002 22:03:27.981887 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec"
	I1002 22:03:30.618338 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:03:30.627191 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 22:03:30.642455 1152871 api_server.go:141] control plane version: v1.28.2
	I1002 22:03:30.642500 1152871 api_server.go:131] duration metric: took 3m4.381557151s to wait for apiserver health ...
	I1002 22:03:30.642511 1152871 cni.go:84] Creating CNI manager for ""
	I1002 22:03:30.642518 1152871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:03:30.644638 1152871 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 22:03:31.019894 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:03:31.019946 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:31.020013 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:31.082766 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:31.082805 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:31.082811 1160029 cri.go:89] found id: ""
	I1002 22:03:31.082819 1160029 logs.go:284] 2 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:31.082875 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.088254 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.093420 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:31.093490 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:31.150297 1160029 cri.go:89] found id: ""
	I1002 22:03:31.150318 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.150326 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:31.150332 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:31.150390 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:31.208396 1160029 cri.go:89] found id: ""
	I1002 22:03:31.208417 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.208425 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:31.208432 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:31.208490 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:31.272890 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:31.272908 1160029 cri.go:89] found id: ""
	I1002 22:03:31.272916 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:31.272975 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.278087 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:31.278156 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:31.339661 1160029 cri.go:89] found id: ""
	I1002 22:03:31.339682 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.339690 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:31.339697 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:31.339754 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:31.411914 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:31.411934 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:31.411939 1160029 cri.go:89] found id: ""
	I1002 22:03:31.411947 1160029 logs.go:284] 2 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:31.412011 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.417396 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.422271 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:31.422336 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:31.477753 1160029 cri.go:89] found id: ""
	I1002 22:03:31.477775 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.477783 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:31.477793 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:31.477863 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:31.537959 1160029 cri.go:89] found id: ""
	I1002 22:03:31.537980 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.537994 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:31.538009 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:31.538022 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:03:30.646855 1152871 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:03:30.652347 1152871 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 22:03:30.652369 1152871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 22:03:30.674209 1152871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
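The CNI step above copies the generated manifest to /var/tmp/minikube/cni.yaml inside the VM and applies it with the bundled kubectl. A simplified Go sketch of that sequence follows; in the real flow the file transfer happens over SSH, and the manifest content (the kindnet objects) is omitted here as a placeholder.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyCNIManifest writes the manifest to the path used in the log and applies it
    // with the in-VM kubectl against the in-VM kubeconfig.
    func applyCNIManifest(manifest []byte) error {
        const target = "/var/tmp/minikube/cni.yaml"
        if err := os.WriteFile(target, manifest, 0644); err != nil {
            return err
        }
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.2/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", target).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder bytes; the real content is the CNI manifest minikube generates.
        if err := applyCNIManifest([]byte("# cni manifest goes here\n")); err != nil {
            fmt.Println(err)
        }
    }
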
	I1002 22:03:41.628403 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.090357852s)
	W1002 22:03:41.628446 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 22:03:41.628455 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:41.628467 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:41.693793 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:41.693879 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:41.788206 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:41.788247 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:41.837235 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:41.837262 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:41.903821 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:41.903851 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:41.992775 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:41.992811 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:42.030778 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:42.030862 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:42.093641 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:42.093732 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:42.148134 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:42.148162 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:44.697142 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:47.133012 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46326->192.168.76.2:8443: read: connection reset by peer
	I1002 22:03:47.133065 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:47.133142 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:47.184804 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:47.184824 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:47.184830 1160029 cri.go:89] found id: ""
	I1002 22:03:47.184838 1160029 logs.go:284] 2 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:47.184892 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.189562 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.193804 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:47.193879 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:47.237882 1160029 cri.go:89] found id: ""
	I1002 22:03:47.237905 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.237914 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:47.237921 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:47.237984 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:47.283549 1160029 cri.go:89] found id: ""
	I1002 22:03:47.283572 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.283581 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:47.283588 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:47.283649 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:45.268101 1152871 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (14.593827569s)
	I1002 22:03:45.268152 1152871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:03:45.283919 1152871 system_pods.go:59] 8 kube-system pods found
	I1002 22:03:45.283962 1152871 system_pods.go:61] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:03:45.283969 1152871 system_pods.go:61] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:03:45.283975 1152871 system_pods.go:61] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:03:45.284018 1152871 system_pods.go:61] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:03:45.284025 1152871 system_pods.go:61] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:03:45.284036 1152871 system_pods.go:61] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:03:45.284041 1152871 system_pods.go:61] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:03:45.284050 1152871 system_pods.go:61] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:03:45.284056 1152871 system_pods.go:74] duration metric: took 15.896255ms to wait for pod list to return data ...
	I1002 22:03:45.284079 1152871 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:03:45.288036 1152871 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:03:45.288073 1152871 node_conditions.go:123] node cpu capacity is 2
	I1002 22:03:45.288087 1152871 node_conditions.go:105] duration metric: took 4.000408ms to run NodePressure ...
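The pod and node checks logged above (counting kube-system pods, reading node CPU and ephemeral-storage capacity) can be reproduced with client-go. The sketch below assumes the kubeconfig path from the log is reachable from wherever the code runs, which is not how minikube itself performs the check.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Count the kube-system pods, as in "8 kube-system pods found" above.
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        // Read node capacity, as in the NodePressure verification above.
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
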
	I1002 22:03:45.288128 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:03:45.539618 1152871 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 22:03:45.544743 1152871 retry.go:31] will retry after 155.848612ms: kubelet not initialised
	I1002 22:03:45.706307 1152871 retry.go:31] will retry after 547.400392ms: kubelet not initialised
	I1002 22:03:46.260451 1152871 retry.go:31] will retry after 612.220756ms: kubelet not initialised
	I1002 22:03:46.879309 1152871 retry.go:31] will retry after 1.197216323s: kubelet not initialised
	I1002 22:03:48.087011 1152871 retry.go:31] will retry after 1.520294818s: kubelet not initialised
	I1002 22:03:49.613680 1152871 retry.go:31] will retry after 2.067754829s: kubelet not initialised
	I1002 22:03:47.326587 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:47.326610 1160029 cri.go:89] found id: ""
	I1002 22:03:47.326619 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:47.326675 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.332278 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:47.332353 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:47.375762 1160029 cri.go:89] found id: ""
	I1002 22:03:47.375783 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.375791 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:47.375798 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:47.375854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:47.419014 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:47.419034 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:47.419040 1160029 cri.go:89] found id: ""
	I1002 22:03:47.419048 1160029 logs.go:284] 2 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:47.419102 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.423907 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.428154 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:47.428224 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:47.477597 1160029 cri.go:89] found id: ""
	I1002 22:03:47.477619 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.477627 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:47.477634 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:47.477697 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:47.522112 1160029 cri.go:89] found id: ""
	I1002 22:03:47.522148 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.522157 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:47.522170 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:47.522187 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:47.571497 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:47.571532 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:47.639710 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:47.639741 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:47.713903 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:47.713926 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:47.713940 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:47.803697 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:47.803747 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:47.855030 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:47.855059 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:47.900506 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:47.900536 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:47.946964 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:47.947001 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:48.015925 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:48.015957 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:48.111318 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:48.111353 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:50.635629 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:50.636075 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:50.636144 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:50.636221 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:50.686873 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:50.686895 1160029 cri.go:89] found id: ""
	I1002 22:03:50.686904 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:50.686961 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.691629 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:50.691701 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:50.734477 1160029 cri.go:89] found id: ""
	I1002 22:03:50.734503 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.734512 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:50.734519 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:50.734587 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:50.776499 1160029 cri.go:89] found id: ""
	I1002 22:03:50.776527 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.776536 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:50.776543 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:50.776604 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:50.823031 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:50.823056 1160029 cri.go:89] found id: ""
	I1002 22:03:50.823064 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:50.823120 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.827608 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:50.827677 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:50.870861 1160029 cri.go:89] found id: ""
	I1002 22:03:50.870883 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.870891 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:50.870897 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:50.870957 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:50.913624 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:50.913646 1160029 cri.go:89] found id: ""
	I1002 22:03:50.913655 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:50.913713 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.918305 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:50.918374 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:50.962684 1160029 cri.go:89] found id: ""
	I1002 22:03:50.962707 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.962715 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:50.962722 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:50.962780 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:51.012692 1160029 cri.go:89] found id: ""
	I1002 22:03:51.012722 1160029 logs.go:284] 0 containers: []
	W1002 22:03:51.012731 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:51.012741 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:51.012754 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:51.111533 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:51.111570 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:51.133954 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:51.133986 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:51.209135 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:51.209156 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:51.209169 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:51.281338 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:51.281366 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:51.396997 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:51.397033 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:51.442241 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:51.442268 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:51.489786 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:51.489826 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:51.687895 1152871 retry.go:31] will retry after 3.545961405s: kubelet not initialised
	I1002 22:03:54.045844 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:54.046303 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:54.046359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:54.046421 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:54.090932 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:54.090958 1160029 cri.go:89] found id: ""
	I1002 22:03:54.090967 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:54.091026 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.096357 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:54.096431 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:54.142507 1160029 cri.go:89] found id: ""
	I1002 22:03:54.142531 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.142539 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:54.142546 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:54.142611 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:54.187424 1160029 cri.go:89] found id: ""
	I1002 22:03:54.187445 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.187454 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:54.187461 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:54.187522 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:54.229971 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:54.229992 1160029 cri.go:89] found id: ""
	I1002 22:03:54.230001 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:54.230057 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.235809 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:54.235891 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:54.279621 1160029 cri.go:89] found id: ""
	I1002 22:03:54.279643 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.279652 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:54.279658 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:54.279718 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:54.326775 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:54.326796 1160029 cri.go:89] found id: ""
	I1002 22:03:54.326805 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:54.326868 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.331502 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:54.331588 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:54.380370 1160029 cri.go:89] found id: ""
	I1002 22:03:54.380391 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.380399 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:54.380405 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:54.380461 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:54.424961 1160029 cri.go:89] found id: ""
	I1002 22:03:54.425033 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.425049 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:54.425060 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:54.425072 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:54.450635 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:54.450661 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:54.532991 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:54.533011 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:54.533027 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:54.585650 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:54.585680 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:54.680115 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:54.680149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:54.726722 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:54.726750 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:54.771800 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:54.771833 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:54.823860 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:54.823892 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:55.239869 1152871 retry.go:31] will retry after 6.03497621s: kubelet not initialised
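The "will retry after ..." lines above come from a bounded retry loop whose delay grows between attempts while the restarted kubelet initialises. A generic Go sketch of that pattern follows; minikube's own retry package may use different growth, jitter, and limits.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryWithBackoff calls fn up to attempts times, doubling the wait after each failure.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %s: kubelet not initialised\n", delay)
            time.Sleep(delay)
            delay *= 2 // grow the wait between attempts
        }
        return errors.New("kubelet not initialised after all retries")
    }

    func main() {
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            return errors.New("kubelet not initialised")
        })
    }
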
	I1002 22:03:57.425743 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:57.426253 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:57.426305 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:57.426372 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:57.472723 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:57.472749 1160029 cri.go:89] found id: ""
	I1002 22:03:57.472758 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:57.472824 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.477768 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:57.477838 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:57.524289 1160029 cri.go:89] found id: ""
	I1002 22:03:57.524316 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.524346 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:57.524357 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:57.524428 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:57.568739 1160029 cri.go:89] found id: ""
	I1002 22:03:57.568760 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.568768 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:57.568776 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:57.568834 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:57.615328 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:57.615349 1160029 cri.go:89] found id: ""
	I1002 22:03:57.615357 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:57.615413 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.620440 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:57.620516 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:57.668585 1160029 cri.go:89] found id: ""
	I1002 22:03:57.668606 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.668614 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:57.668626 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:57.668685 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:57.712177 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:57.712207 1160029 cri.go:89] found id: ""
	I1002 22:03:57.712220 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:57.712295 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.716907 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:57.716981 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:57.759224 1160029 cri.go:89] found id: ""
	I1002 22:03:57.759248 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.759256 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:57.759263 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:57.759321 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:57.808285 1160029 cri.go:89] found id: ""
	I1002 22:03:57.808311 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.808320 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:57.808330 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:57.808343 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:57.853564 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:57.853591 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:57.902454 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:57.902487 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:57.953289 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:57.953316 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:58.057953 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:58.057990 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:58.080225 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:58.080253 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:58.167505 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:58.167589 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:58.167612 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:58.218639 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:58.218672 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:00.854468 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:00.854981 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:00.855028 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:00.855099 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:00.899290 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:00.899314 1160029 cri.go:89] found id: ""
	I1002 22:04:00.899323 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:00.899394 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:00.904164 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:00.904263 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:00.949598 1160029 cri.go:89] found id: ""
	I1002 22:04:00.949621 1160029 logs.go:284] 0 containers: []
	W1002 22:04:00.949630 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:00.949636 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:00.949710 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:00.995627 1160029 cri.go:89] found id: ""
	I1002 22:04:00.995655 1160029 logs.go:284] 0 containers: []
	W1002 22:04:00.995664 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:00.995671 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:00.995730 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:01.043414 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:01.043436 1160029 cri.go:89] found id: ""
	I1002 22:04:01.043445 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:01.043503 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:01.048244 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:01.048319 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:01.094544 1160029 cri.go:89] found id: ""
	I1002 22:04:01.094633 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.094657 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:01.094670 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:01.094757 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:01.143846 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:01.143921 1160029 cri.go:89] found id: ""
	I1002 22:04:01.143962 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:01.144041 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:01.149241 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:01.149318 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:01.202316 1160029 cri.go:89] found id: ""
	I1002 22:04:01.202365 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.202377 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:01.202384 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:01.202464 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:01.250206 1160029 cri.go:89] found id: ""
	I1002 22:04:01.250241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.250251 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:01.250262 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:01.250275 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:01.354668 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:01.354699 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:01.376069 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:01.376101 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:01.460001 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:01.460034 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:01.460046 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:01.526093 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:01.526127 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:01.627493 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:01.627545 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:01.679757 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:01.679786 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:01.730181 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:01.730215 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:01.280477 1152871 retry.go:31] will retry after 9.468766097s: kubelet not initialised
	I1002 22:04:04.289888 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:04.290287 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:04.290330 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:04.290403 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:04.334127 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:04.334147 1160029 cri.go:89] found id: ""
	I1002 22:04:04.334156 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:04.334210 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.338814 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:04.338898 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:04.380939 1160029 cri.go:89] found id: ""
	I1002 22:04:04.380968 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.380980 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:04.380995 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:04.381076 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:04.425955 1160029 cri.go:89] found id: ""
	I1002 22:04:04.425980 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.425994 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:04.426002 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:04.426060 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:04.473948 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:04.473969 1160029 cri.go:89] found id: ""
	I1002 22:04:04.473977 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:04.474033 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.478317 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:04.478390 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:04.521738 1160029 cri.go:89] found id: ""
	I1002 22:04:04.521809 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.521831 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:04.521853 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:04.521992 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:04.567461 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:04.567482 1160029 cri.go:89] found id: ""
	I1002 22:04:04.567490 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:04.567564 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.572754 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:04.572841 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:04.617527 1160029 cri.go:89] found id: ""
	I1002 22:04:04.617560 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.617570 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:04.617576 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:04.617645 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:04.660214 1160029 cri.go:89] found id: ""
	I1002 22:04:04.660241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.660249 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:04.660259 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:04.660274 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:04.773307 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:04.773342 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:04.820145 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:04.820174 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:04.867703 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:04.867736 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:04.929458 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:04.929485 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:05.034976 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:05.035017 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:05.057328 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:05.057359 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:05.137147 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:05.137168 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:05.137183 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:07.700017 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:07.700421 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:07.700475 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:07.700539 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:07.742898 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:07.742918 1160029 cri.go:89] found id: ""
	I1002 22:04:07.742927 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:07.742983 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.747593 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:07.747663 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:07.794300 1160029 cri.go:89] found id: ""
	I1002 22:04:07.794322 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.794330 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:07.794336 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:07.794394 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:07.835326 1160029 cri.go:89] found id: ""
	I1002 22:04:07.835354 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.835363 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:07.835370 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:07.835431 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:07.879004 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:07.879030 1160029 cri.go:89] found id: ""
	I1002 22:04:07.879039 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:07.879094 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.883476 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:07.883544 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:07.924164 1160029 cri.go:89] found id: ""
	I1002 22:04:07.924190 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.924198 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:07.924204 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:07.924259 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:07.967096 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:07.967116 1160029 cri.go:89] found id: ""
	I1002 22:04:07.967124 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:07.967178 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.971629 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:07.971695 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:08.020843 1160029 cri.go:89] found id: ""
	I1002 22:04:08.020866 1160029 logs.go:284] 0 containers: []
	W1002 22:04:08.020874 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:08.020881 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:08.020943 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:08.071242 1160029 cri.go:89] found id: ""
	I1002 22:04:08.071268 1160029 logs.go:284] 0 containers: []
	W1002 22:04:08.071289 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:08.071300 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:08.071316 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:08.183478 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:08.183556 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:08.230185 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:08.230219 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:08.278947 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:08.278982 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:08.326401 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:08.326429 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:08.436602 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:08.436644 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:08.458743 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:08.458774 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:08.538090 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:08.538166 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:08.538189 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.092147 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:11.092549 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:11.092599 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:11.092656 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:11.138396 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.138419 1160029 cri.go:89] found id: ""
	I1002 22:04:11.138429 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:11.138492 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.143105 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:11.143176 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:11.191124 1160029 cri.go:89] found id: ""
	I1002 22:04:11.191146 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.191155 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:11.191161 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:11.191221 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:11.238479 1160029 cri.go:89] found id: ""
	I1002 22:04:11.238502 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.238511 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:11.238517 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:11.238582 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:11.290364 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:11.290384 1160029 cri.go:89] found id: ""
	I1002 22:04:11.290392 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:11.290453 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.295107 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:11.295181 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:11.338167 1160029 cri.go:89] found id: ""
	I1002 22:04:11.338189 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.338197 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:11.338204 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:11.338273 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:11.385641 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:11.385663 1160029 cri.go:89] found id: ""
	I1002 22:04:11.385671 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:11.385733 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.390692 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:11.390763 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:11.436492 1160029 cri.go:89] found id: ""
	I1002 22:04:11.436517 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.436525 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:11.436532 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:11.436590 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:11.478168 1160029 cri.go:89] found id: ""
	I1002 22:04:11.478192 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.478201 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:11.478210 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:11.478223 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:11.499582 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:11.499609 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:11.584985 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:11.585007 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:11.585020 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.635164 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:11.635198 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:11.740656 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:11.740694 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:11.785401 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:11.785430 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:11.830493 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:11.830530 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:11.881827 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:11.881863 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:10.754626 1152871 retry.go:31] will retry after 13.418516702s: kubelet not initialised
	I1002 22:04:14.493731 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:14.494142 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:14.494187 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:14.494240 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:14.543876 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:14.544232 1160029 cri.go:89] found id: ""
	I1002 22:04:14.544247 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:14.544324 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.548931 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:14.549001 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:14.591335 1160029 cri.go:89] found id: ""
	I1002 22:04:14.591402 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.591424 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:14.591439 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:14.591500 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:14.632781 1160029 cri.go:89] found id: ""
	I1002 22:04:14.632804 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.632812 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:14.632819 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:14.632876 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:14.676189 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:14.676212 1160029 cri.go:89] found id: ""
	I1002 22:04:14.676221 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:14.676277 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.681167 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:14.681265 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:14.726630 1160029 cri.go:89] found id: ""
	I1002 22:04:14.726655 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.726665 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:14.726672 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:14.726768 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:14.775998 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:14.776020 1160029 cri.go:89] found id: ""
	I1002 22:04:14.776028 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:14.776086 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.781008 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:14.781134 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:14.826140 1160029 cri.go:89] found id: ""
	I1002 22:04:14.826164 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.826172 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:14.826179 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:14.826265 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:14.871429 1160029 cri.go:89] found id: ""
	I1002 22:04:14.871497 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.871520 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:14.871536 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:14.871549 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:14.920304 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:14.920334 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:15.013742 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:15.013789 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:15.076216 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:15.076246 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:15.124476 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:15.124511 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:15.178593 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:15.178622 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:15.290368 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:15.290404 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:15.311963 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:15.311992 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:15.384667 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:17.885630 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:17.886064 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:17.886122 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:17.886187 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:17.930138 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:17.930160 1160029 cri.go:89] found id: ""
	I1002 22:04:17.930171 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:17.930227 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:17.934815 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:17.934923 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:17.979281 1160029 cri.go:89] found id: ""
	I1002 22:04:17.979355 1160029 logs.go:284] 0 containers: []
	W1002 22:04:17.979384 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:17.979399 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:17.979485 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:18.023989 1160029 cri.go:89] found id: ""
	I1002 22:04:18.024079 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.024107 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:18.024120 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:18.024206 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:18.072842 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:18.072919 1160029 cri.go:89] found id: ""
	I1002 22:04:18.072941 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:18.073032 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:18.078244 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:18.078361 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:18.128570 1160029 cri.go:89] found id: ""
	I1002 22:04:18.128598 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.128606 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:18.128613 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:18.128676 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:18.174783 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:18.174856 1160029 cri.go:89] found id: ""
	I1002 22:04:18.174879 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:18.174957 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:18.180222 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:18.180346 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:18.226429 1160029 cri.go:89] found id: ""
	I1002 22:04:18.226456 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.226475 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:18.226484 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:18.226555 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:18.271652 1160029 cri.go:89] found id: ""
	I1002 22:04:18.271728 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.271742 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:18.271753 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:18.271767 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:18.318377 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:18.318405 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:18.415723 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:18.415761 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:18.462191 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:18.462221 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:18.509075 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:18.509108 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:18.564223 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:18.564249 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:18.680278 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:18.680314 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:18.702505 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:18.702538 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:18.782134 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:21.282529 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:21.282935 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:21.282978 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:21.283031 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:21.326756 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:21.326779 1160029 cri.go:89] found id: ""
	I1002 22:04:21.326788 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:21.326844 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.331359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:21.331427 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:21.374259 1160029 cri.go:89] found id: ""
	I1002 22:04:21.374282 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.374290 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:21.374297 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:21.374353 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:21.415220 1160029 cri.go:89] found id: ""
	I1002 22:04:21.415241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.415250 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:21.415256 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:21.415313 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:21.458531 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:21.458551 1160029 cri.go:89] found id: ""
	I1002 22:04:21.458560 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:21.458616 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.463215 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:21.463289 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:21.514768 1160029 cri.go:89] found id: ""
	I1002 22:04:21.514790 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.514799 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:21.514805 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:21.514864 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:21.556699 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:21.556720 1160029 cri.go:89] found id: ""
	I1002 22:04:21.556728 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:21.556785 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.561715 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:21.561784 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:21.603910 1160029 cri.go:89] found id: ""
	I1002 22:04:21.603975 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.603989 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:21.603996 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:21.604059 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:21.649738 1160029 cri.go:89] found id: ""
	I1002 22:04:21.649761 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.649769 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:21.649779 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:21.649794 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:21.695647 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:21.695680 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:21.744694 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:21.744720 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:21.858676 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:21.858711 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:21.881304 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:21.881336 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:21.960590 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:21.960668 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:21.960712 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:22.007588 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:22.007624 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:22.109950 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:22.109991 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:24.181723 1152871 retry.go:31] will retry after 7.765021344s: kubelet not initialised
	I1002 22:04:24.662592 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:24.663050 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:24.663106 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:24.663164 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:24.710564 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:24.710641 1160029 cri.go:89] found id: ""
	I1002 22:04:24.710665 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:24.710753 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.716172 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:24.716260 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:24.762119 1160029 cri.go:89] found id: ""
	I1002 22:04:24.762140 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.762149 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:24.762155 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:24.762216 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:24.804776 1160029 cri.go:89] found id: ""
	I1002 22:04:24.804799 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.804807 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:24.804814 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:24.804871 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:24.847302 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:24.847327 1160029 cri.go:89] found id: ""
	I1002 22:04:24.847335 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:24.847391 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.852099 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:24.852189 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:24.898495 1160029 cri.go:89] found id: ""
	I1002 22:04:24.898568 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.898584 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:24.898592 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:24.898654 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:24.949603 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:24.949625 1160029 cri.go:89] found id: ""
	I1002 22:04:24.949633 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:24.949689 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.954186 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:24.954258 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:24.995299 1160029 cri.go:89] found id: ""
	I1002 22:04:24.995366 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.995379 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:24.995387 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:24.995447 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:25.042471 1160029 cri.go:89] found id: ""
	I1002 22:04:25.042539 1160029 logs.go:284] 0 containers: []
	W1002 22:04:25.042554 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:25.042564 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:25.042577 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:25.093890 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:25.093928 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:25.142064 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:25.142093 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:25.258034 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:25.258070 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:25.279855 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:25.279885 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:25.356782 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:25.356804 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:25.356816 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:25.407084 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:25.407115 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:25.506794 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:25.506828 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.050802 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:28.051272 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:28.051338 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:28.051407 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:28.096403 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:28.096423 1160029 cri.go:89] found id: ""
	I1002 22:04:28.096431 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:28.096487 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.101277 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:28.101350 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:28.143442 1160029 cri.go:89] found id: ""
	I1002 22:04:28.143469 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.143477 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:28.143483 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:28.143544 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:28.201954 1160029 cri.go:89] found id: ""
	I1002 22:04:28.201977 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.201985 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:28.201991 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:28.202050 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:28.257907 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:28.257971 1160029 cri.go:89] found id: ""
	I1002 22:04:28.258000 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:28.258072 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.263205 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:28.263279 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:28.317042 1160029 cri.go:89] found id: ""
	I1002 22:04:28.317066 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.317075 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:28.317081 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:28.317155 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:28.362660 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.362681 1160029 cri.go:89] found id: ""
	I1002 22:04:28.362690 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:28.362745 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.367234 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:28.367301 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:28.420348 1160029 cri.go:89] found id: ""
	I1002 22:04:28.420415 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.420437 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:28.420459 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:28.420547 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:28.473903 1160029 cri.go:89] found id: ""
	I1002 22:04:28.473927 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.473936 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:28.473945 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:28.473958 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:28.533550 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:28.533581 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:28.631144 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:28.631180 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.699486 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:28.699511 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:28.764153 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:28.764195 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:28.836328 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:28.836357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:28.979109 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:28.979149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:29.007116 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:29.007149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:04:31.953970 1152871 kubeadm.go:787] kubelet initialised
	I1002 22:04:31.953995 1152871 kubeadm.go:788] duration metric: took 46.414355607s waiting for restarted kubelet to initialise ...
	I1002 22:04:31.954004 1152871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:31.960759 1152871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.971397 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.971420 1152871 pod_ready.go:81] duration metric: took 10.632329ms waiting for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.971433 1152871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.977952 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.977993 1152871 pod_ready.go:81] duration metric: took 6.551371ms waiting for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.978045 1152871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.984427 1152871 pod_ready.go:92] pod "etcd-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.984454 1152871 pod_ready.go:81] duration metric: took 6.398731ms waiting for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.984471 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.991091 1152871 pod_ready.go:92] pod "kube-apiserver-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.991115 1152871 pod_ready.go:81] duration metric: took 6.63686ms waiting for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.991130 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.352175 1152871 pod_ready.go:92] pod "kube-controller-manager-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:32.352200 1152871 pod_ready.go:81] duration metric: took 361.06133ms waiting for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.352213 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.752950 1152871 pod_ready.go:92] pod "kube-proxy-pqzpr" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:32.752978 1152871 pod_ready.go:81] duration metric: took 400.756574ms waiting for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.752990 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.153061 1152871 pod_ready.go:92] pod "kube-scheduler-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:33.153089 1152871 pod_ready.go:81] duration metric: took 400.091109ms waiting for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.153098 1152871 pod_ready.go:38] duration metric: took 1.19908599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:33.153115 1152871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:04:33.162793 1152871 ops.go:34] apiserver oom_adj: -16
	I1002 22:04:33.162815 1152871 kubeadm.go:640] restartCluster took 4m33.713301751s
	I1002 22:04:33.162824 1152871 kubeadm.go:406] StartCluster complete in 4m33.866735038s
	I1002 22:04:33.162841 1152871 settings.go:142] acquiring lock: {Name:mk84ed9b341869374b10cf082af1bfa542d39dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:33.162907 1152871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:04:33.163820 1152871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:33.164699 1152871 kapi.go:59] client config for pause-050274: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:04:33.165290 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:04:33.165415 1152871 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 22:04:33.168719 1152871 out.go:177] * Enabled addons: 
	I1002 22:04:33.165655 1152871 config.go:182] Loaded profile config "pause-050274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 22:04:33.172217 1152871 addons.go:502] enable addons completed in 6.784519ms: enabled=[]
	I1002 22:04:33.190183 1152871 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-050274" context rescaled to 1 replicas
	I1002 22:04:33.190265 1152871 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:04:33.193364 1152871 out.go:177] * Verifying Kubernetes components...
	I1002 22:04:33.196201 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:04:33.312721 1152871 node_ready.go:35] waiting up to 6m0s for node "pause-050274" to be "Ready" ...
	I1002 22:04:33.312776 1152871 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 22:04:33.352441 1152871 node_ready.go:49] node "pause-050274" has status "Ready":"True"
	I1002 22:04:33.352465 1152871 node_ready.go:38] duration metric: took 39.715486ms waiting for node "pause-050274" to be "Ready" ...
	I1002 22:04:33.352476 1152871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:33.556619 1152871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.952859 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:33.952885 1152871 pod_ready.go:81] duration metric: took 396.231877ms waiting for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.952897 1152871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.352641 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:34.352667 1152871 pod_ready.go:81] duration metric: took 399.762642ms waiting for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.352681 1152871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.752022 1152871 pod_ready.go:92] pod "etcd-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:34.752047 1152871 pod_ready.go:81] duration metric: took 399.358164ms waiting for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.752062 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.153270 1152871 pod_ready.go:92] pod "kube-apiserver-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.153296 1152871 pod_ready.go:81] duration metric: took 401.226316ms waiting for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.153309 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.552499 1152871 pod_ready.go:92] pod "kube-controller-manager-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.552524 1152871 pod_ready.go:81] duration metric: took 399.206764ms waiting for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.552537 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.953686 1152871 pod_ready.go:92] pod "kube-proxy-pqzpr" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.953746 1152871 pod_ready.go:81] duration metric: took 401.161865ms waiting for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.953781 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:36.353538 1152871 pod_ready.go:92] pod "kube-scheduler-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:36.353623 1152871 pod_ready.go:81] duration metric: took 399.821088ms waiting for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:36.353648 1152871 pod_ready.go:38] duration metric: took 3.0011612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:36.353702 1152871 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:04:36.353810 1152871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:04:36.367405 1152871 api_server.go:72] duration metric: took 3.177092046s to wait for apiserver process to appear ...
	I1002 22:04:36.367430 1152871 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:04:36.367448 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:04:36.376325 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 22:04:36.377804 1152871 api_server.go:141] control plane version: v1.28.2
	I1002 22:04:36.377826 1152871 api_server.go:131] duration metric: took 10.388227ms to wait for apiserver health ...
	I1002 22:04:36.377844 1152871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:04:36.562936 1152871 system_pods.go:59] 8 kube-system pods found
	I1002 22:04:36.563045 1152871 system_pods.go:61] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:04:36.563074 1152871 system_pods.go:61] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:04:36.563124 1152871 system_pods.go:61] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:04:36.563174 1152871 system_pods.go:61] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:04:36.563195 1152871 system_pods.go:61] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:04:36.563230 1152871 system_pods.go:61] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:04:36.563286 1152871 system_pods.go:61] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:04:36.563316 1152871 system_pods.go:61] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:04:36.563368 1152871 system_pods.go:74] duration metric: took 185.493954ms to wait for pod list to return data ...
	I1002 22:04:36.563407 1152871 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:04:36.753855 1152871 default_sa.go:45] found service account: "default"
	I1002 22:04:36.753963 1152871 default_sa.go:55] duration metric: took 190.526201ms for default service account to be created ...
	I1002 22:04:36.753996 1152871 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:04:36.957594 1152871 system_pods.go:86] 8 kube-system pods found
	I1002 22:04:36.957658 1152871 system_pods.go:89] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:04:36.957687 1152871 system_pods.go:89] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:04:36.957706 1152871 system_pods.go:89] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:04:36.957741 1152871 system_pods.go:89] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:04:36.957768 1152871 system_pods.go:89] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:04:36.957789 1152871 system_pods.go:89] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:04:36.957827 1152871 system_pods.go:89] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:04:36.957851 1152871 system_pods.go:89] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:04:36.957873 1152871 system_pods.go:126] duration metric: took 203.818194ms to wait for k8s-apps to be running ...
	I1002 22:04:36.957908 1152871 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:04:36.958008 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:04:36.976008 1152871 system_svc.go:56] duration metric: took 18.089648ms WaitForService to wait for kubelet.
	I1002 22:04:36.976079 1152871 kubeadm.go:581] duration metric: took 3.78577352s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 22:04:36.976133 1152871 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:04:37.162000 1152871 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:04:37.162074 1152871 node_conditions.go:123] node cpu capacity is 2
	I1002 22:04:37.162098 1152871 node_conditions.go:105] duration metric: took 185.948861ms to run NodePressure ...
	I1002 22:04:37.162124 1152871 start.go:228] waiting for startup goroutines ...
	I1002 22:04:37.162156 1152871 start.go:233] waiting for cluster config update ...
	I1002 22:04:37.162182 1152871 start.go:242] writing updated cluster config ...
	I1002 22:04:37.162558 1152871 ssh_runner.go:195] Run: rm -f paused
	I1002 22:04:37.291549 1152871 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 22:04:37.295187 1152871 out.go:177] * Done! kubectl is now configured to use "pause-050274" cluster and "default" namespace by default
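
	The run above finishes once api_server.go reports that https://192.168.67.2:8443/healthz returned 200 after earlier "connection refused" attempts. What follows is a minimal Go sketch of that kind of readiness probe, included only as an illustration; it is not minikube's implementation, and the endpoint address, polling interval, deadline, and the use of InsecureSkipVerify are assumptions made for the example (minikube's real client is configured with the cluster CA and client certificates shown in the kapi.go line above).

	// healthz_probe_sketch.go: illustrative only, not minikube code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Skip certificate verification purely for this sketch.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		url := "https://192.168.67.2:8443/healthz" // address taken from the log above

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				// Corresponds to the "connection refused" case seen earlier in the log.
				fmt.Println("healthz not reachable yet:", err)
				time.Sleep(2 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}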
	
	* 
	* ==> CRI-O <==
	* Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.201214786Z" level=info msg="Created container f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110: kube-system/kube-proxy-pqzpr/kube-proxy" id=d696ffc2-baf4-446d-b13b-d258ced9a7f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.202026696Z" level=info msg="Starting container: f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110" id=9502d89c-74fe-4122-9dc4-e4b5473b3796 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.305889913Z" level=info msg="Started container" PID=4501 containerID=f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110 description=kube-system/kube-proxy-pqzpr/kube-proxy id=9502d89c-74fe-4122-9dc4-e4b5473b3796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.833687955Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865689735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865770523Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865793234Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898772028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898819010Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898841484Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944155425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944195909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944215289Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.969627127Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.969667094Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:04:33 pause-050274 crio[2728]: time="2023-10-02 22:04:33.219071775Z" level=info msg="Stopping container: fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d (timeout: 30s)" id=230c154e-7672-409a-82f2-7bb4709a64f6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.338394362Z" level=info msg="Stopped container fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=230c154e-7672-409a-82f2-7bb4709a64f6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.339309697Z" level=info msg="Stopping pod sandbox: 093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b" id=de830900-ab8c-45fe-8c2c-b5c9053bbd73 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.340355173Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-cm5nm Namespace:kube-system ID:093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b UID:18849f27-d4fc-44c4-b9e0-ec7b818e9c76 NetNS:/var/run/netns/81290806-fef1-4e7c-9cdb-d4297276d789 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.340518676Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-cm5nm from CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.370439671Z" level=info msg="Stopped pod sandbox: 093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b" id=de830900-ab8c-45fe-8c2c-b5c9053bbd73 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.751645501Z" level=info msg="Removing container: fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d" id=11fe00ec-4a6a-4298-9f28-d728ea3c1944 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.782418382Z" level=info msg="Removed container fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=11fe00ec-4a6a-4298-9f28-d728ea3c1944 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.784455550Z" level=info msg="Removing container: 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60" id=6e22342f-dfe5-455b-ba51-c90fb0006358 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.810572468Z" level=info msg="Removed container 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=6e22342f-dfe5-455b-ba51-c90fb0006358 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b11cf6cfdfbdc       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Running             kindnet-cni               3                   f65507beae24a       kindnet-ztnzr
	f7e4a5a8ab188       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   About a minute ago   Running             kube-proxy                3                   daf3f7c3b2ad8       kube-proxy-pqzpr
	8632e64640b55       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Running             coredns                   3                   97e11f94c241b       coredns-5dd5756b68-t6nc4
	bbb1e358b1459       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   About a minute ago   Running             kube-controller-manager   4                   26de429363aca       kube-controller-manager-pause-050274
	07ba4f10da84d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   2 minutes ago        Running             etcd                      3                   9fcc0372960b9       etcd-pause-050274
	8eaba24185fb3       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   2 minutes ago        Exited              kube-controller-manager   3                   26de429363aca       kube-controller-manager-pause-050274
	ae4711ea86465       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   2 minutes ago        Running             kube-scheduler            3                   e42a1887c1ea2       kube-scheduler-pause-050274
	a19e78a138148       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   4 minutes ago        Running             kube-apiserver            2                   7be7cba416b4c       kube-apiserver-pause-050274
	75fb3c3a6e10b       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago        Exited              kindnet-cni               2                   f65507beae24a       kindnet-ztnzr
	1c2c796686a0d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago        Exited              coredns                   2                   97e11f94c241b       coredns-5dd5756b68-t6nc4
	47232deeac89d       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   4 minutes ago        Exited              kube-proxy                2                   daf3f7c3b2ad8       kube-proxy-pqzpr
	ce0a25ea6fc39       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   4 minutes ago        Exited              kube-scheduler            2                   e42a1887c1ea2       kube-scheduler-pause-050274
	4b6c0654becf2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago        Exited              etcd                      2                   9fcc0372960b9       etcd-pause-050274
	930be0a17a5f5       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   5 minutes ago        Exited              kube-apiserver            1                   7be7cba416b4c       kube-apiserver-pause-050274
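
	The table above comes from the same "sudo crictl ps -a" invocation recorded earlier in the log, while the cri.go lines query one component at a time with --quiet --name=<component>. Below is a minimal Go sketch of collecting per-component container IDs that way; it is analogous to, but not copied from, minikube's cri.go, and the component list plus the assumption that sudo and crictl are available on the local host are illustrative only.

	// crictl_list_sketch.go: illustrative only, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs: sudo crictl ps -a --quiet --name=<name>
	// and returns the container IDs, one per output line.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
			ids, err := listContainers(component)
			if err != nil {
				fmt.Println(component, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
		}
	}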
	
	* 
	* ==> coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60] <==
	* 
	* ==> coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55643 - 45390 "HINFO IN 7808826954906539208.9011266081881239613. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024287056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [8632e64640b550b270e902184a6b556f14cbf57afb6741596c54feeff9272049] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58382 - 34606 "HINFO IN 7509761253326897971.1672519378214393664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020881947s
	
	* 
	* ==> coredns [fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d] <==
	* 
	* ==> describe nodes <==
	* Name:               pause-050274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-050274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=pause-050274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T21_58_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:58:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-050274
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 22:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-050274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf5697589254791a1250402ed5024c0
	  System UUID:                f94a85a1-cfe2-427c-9e7a-9d431d040be8
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-t6nc4                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m50s
	  kube-system                 etcd-pause-050274                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m3s
	  kube-system                 kindnet-ztnzr                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m50s
	  kube-system                 kube-apiserver-pause-050274             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-pause-050274    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-pqzpr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-scheduler-pause-050274             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m48s                  kube-proxy       
	  Normal   Starting                 71s                    kube-proxy       
	  Normal   Starting                 4m25s                  kube-proxy       
	  Normal   NodeHasSufficientPID     6m14s (x8 over 6m14s)  kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x8 over 6m14s)  kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m4s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m3s                   kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m3s                   kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m3s                   kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m51s                  node-controller  Node pause-050274 event: Registered Node pause-050274 in Controller
	  Normal   NodeReady                5m19s                  kubelet          Node pause-050274 status is now: NodeReady
	  Warning  ContainerGCFailed        5m3s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  77s (x6 over 4m13s)    kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    77s (x6 over 4m13s)    kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     77s (x6 over 4m13s)    kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           72s                    node-controller  Node pause-050274 event: Registered Node pause-050274 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000729] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000b7a96011
	[  +0.001048] FS-Cache: N-key=[8] '7e613b0000000000'
	[  +0.003162] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000c6b3040d
	[  +0.001031] FS-Cache: O-key=[8] '7e613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000165fee4f
	[  +0.001045] FS-Cache: N-key=[8] '7e613b0000000000'
	[Oct 2 21:34] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000092679c6a
	[  +0.001107] FS-Cache: O-key=[8] '7d613b0000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=000000007e0e0088
	[  +0.001044] FS-Cache: N-key=[8] '7d613b0000000000'
	[  +0.310553] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000e895d03e
	[  +0.001082] FS-Cache: O-key=[8] '83613b0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000734ba06c
	[  +0.001060] FS-Cache: N-key=[8] '83613b0000000000'
	[  +1.089292] 9pnet: p9_fd_create_tcp (1073420): problem connecting socket to 192.168.49.1
	
	* 
	* ==> etcd [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb] <==
	* {"level":"info","ts":"2023-10-02T22:02:26.554608Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T22:02:26.554838Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:02:26.554854Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:02:26.555085Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-02T22:02:26.555252Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555287Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555298Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-10-02T22:02:26.555596Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-10-02T22:02:26.555671Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T22:02:26.5557Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T22:02:27.737358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.738893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-050274 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T22:02:27.738982Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:02:27.740292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T22:02:27.739003Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:02:27.741666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-02T22:02:27.74526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T22:02:27.745335Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] <==
	* {"level":"info","ts":"2023-10-02T21:59:58.835595Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:00.706286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.706586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.70664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.706676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.708678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-050274 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T22:00:00.708924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:00:00.710121Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T22:00:00.710381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:00:00.711497Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-02T22:00:00.716198Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T22:00:00.716308Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T22:00:20.674538Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-02T22:00:20.674608Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-050274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-02T22:00:20.674693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.675244Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.695512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.695613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-02T22:00:20.695704Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-02T22:00:20.712899Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:20.713103Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:20.713161Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-050274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:04:39 up  4:47,  0 users,  load average: 1.17, 1.69, 1.82
	Linux pause-050274 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] <==
	* I1002 22:00:03.935714       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 22:00:03.937447       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1002 22:00:03.937962       1 main.go:116] setting mtu 1500 for CNI 
	I1002 22:00:03.938065       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 22:00:03.938138       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 22:00:04.225379       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:04.225716       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:05.226355       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:07.227439       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:13.923686       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:00:13.925618       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [b11cf6cfdfbdc86a6187d98e8438bab3cedfc9bc9b73c7e9dbc5c1368cfb10f4] <==
	* I1002 22:03:27.182019       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 22:03:27.182103       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1002 22:03:27.182285       1 main.go:116] setting mtu 1500 for CNI 
	I1002 22:03:27.182298       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 22:03:27.182312       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 22:03:27.831441       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:27.833379       1 main.go:227] handling current node
	I1002 22:03:37.856822       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:37.856863       1 main.go:227] handling current node
	I1002 22:03:47.869037       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:47.869062       1 main.go:227] handling current node
	I1002 22:03:57.873591       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:57.873758       1 main.go:227] handling current node
	I1002 22:04:07.885654       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:07.885776       1 main.go:227] handling current node
	I1002 22:04:17.900813       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:17.900942       1 main.go:227] handling current node
	I1002 22:04:27.905027       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:27.905059       1 main.go:227] handling current node
	I1002 22:04:37.922386       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:37.922518       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] <==
	* W1002 21:59:53.436401       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:59:55.093990       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:59:55.318412       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1002 21:59:57.832668       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] <==
	* Trace[912481136]: [17.063390309s] [17.063390309s] END
	I1002 22:03:45.227858       1 trace.go:236] Trace[2133278845]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7c3d6130-d452-4a3f-9fe2-3130dea7af9b,client:192.168.67.2,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.28.2 (linux/arm64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (02-Oct-2023 22:03:27.971) (total time: 17256ms):
	Trace[2133278845]: ---"limitedReadBody succeeded" len:2832 35ms (22:03:28.007)
	Trace[2133278845]: ["GuaranteedUpdate etcd3" audit-id:7c3d6130-d452-4a3f-9fe2-3130dea7af9b,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 17219ms (22:03:28.008)
	Trace[2133278845]:  ---"Txn call completed" 17192ms (22:03:45.227)]
	Trace[2133278845]: [17.256429476s] [17.256429476s] END
	I1002 22:03:45.230943       1 trace.go:236] Trace[1593935176]: "Get" accept:application/json,audit-id:e2aa66ae-369c-418f-aa6a-504e0e57493f,client:127.0.0.1,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet,user-agent:kubectl/v1.28.2 (linux/arm64) kubernetes/89a4ea3,verb:GET (02-Oct-2023 22:03:31.559) (total time: 13671ms):
	Trace[1593935176]: ---"About to write a response" 13670ms (22:03:45.230)
	Trace[1593935176]: [13.671621211s] [13.671621211s] END
	I1002 22:03:45.231789       1 trace.go:236] Trace[1086804050]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:11614c67-93bc-4d94-944e-3ee112245c8b,client:192.168.67.2,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet/status,user-agent:kube-controller-manager/v1.28.2 (linux/arm64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (02-Oct-2023 22:03:27.964) (total time: 17267ms):
	Trace[1086804050]: ["GuaranteedUpdate etcd3" audit-id:11614c67-93bc-4d94-944e-3ee112245c8b,key:/daemonsets/kube-system/kindnet,type:*apps.DaemonSet,resource:daemonsets.apps 17259ms (22:03:27.972)
	Trace[1086804050]:  ---"About to Encode" 78ms (22:03:28.054)
	Trace[1086804050]:  ---"Txn call completed" 17175ms (22:03:45.230)]
	Trace[1086804050]: [17.267139942s] [17.267139942s] END
	I1002 22:03:45.252676       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 22:03:45.420055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 22:03:45.431316       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 22:03:45.521457       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:03:45.529710       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	E1002 22:03:53.893288       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:03.894117       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:13.895052       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:23.896304       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1002 22:04:33.193783       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1002 22:04:33.897467       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec] <==
	* I1002 22:02:27.108963       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:02:27.614929       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1002 22:02:27.614962       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:27.616267       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 22:02:27.616400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 22:02:27.617354       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1002 22:02:27.617416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1002 22:02:41.643821       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-controller-manager [bbb1e358b145912c2ca24bdaf715f057b64d96b3d2ad42d296be9f3ee64227dd] <==
	* I1002 22:03:27.781645       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-050274"
	I1002 22:03:27.781786       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1002 22:03:27.781968       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1002 22:03:27.782154       1 taint_manager.go:211] "Sending events to api server"
	I1002 22:03:27.785950       1 event.go:307] "Event occurred" object="pause-050274" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-050274 event: Registered Node pause-050274 in Controller"
	I1002 22:03:28.073952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="403.411662ms"
	I1002 22:03:28.085453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.933µs"
	I1002 22:03:28.113288       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:03:28.113391       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 22:03:28.155479       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:03:28.173402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.9µs"
	I1002 22:03:28.646470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.229298ms"
	I1002 22:03:28.646649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.021µs"
	I1002 22:03:28.679265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.686211ms"
	I1002 22:03:28.679398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.082µs"
	I1002 22:03:32.782364       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1002 22:04:33.201744       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 22:04:33.225822       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cm5nm"
	I1002 22:04:33.271617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.033127ms"
	I1002 22:04:33.289595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.90569ms"
	I1002 22:04:33.289873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.1µs"
	I1002 22:04:38.394802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.931µs"
	I1002 22:04:38.768534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.095µs"
	I1002 22:04:38.778754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.598µs"
	I1002 22:04:38.796035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.1µs"
	
	* 
	* ==> kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] <==
	* I1002 22:00:04.024604       1 server_others.go:69] "Using iptables proxy"
	E1002 22:00:04.027923       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:05.184639       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:07.383833       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 22:00:14.012806       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1002 22:00:14.118921       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:00:14.125460       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:00:14.125507       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:00:14.125515       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:00:14.125567       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:00:14.125820       1 server.go:846] "Version info" version="v1.28.2"
	I1002 22:00:14.125838       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:00:14.127280       1 config.go:188] "Starting service config controller"
	I1002 22:00:14.127336       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:00:14.127370       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:00:14.127374       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:00:14.127913       1 config.go:315] "Starting node config controller"
	I1002 22:00:14.127932       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:00:14.227461       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 22:00:14.227516       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:00:14.228127       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110] <==
	* I1002 22:03:28.225482       1 server_others.go:69] "Using iptables proxy"
	I1002 22:03:28.254620       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1002 22:03:28.301023       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:03:28.304770       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:03:28.304889       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:03:28.304935       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:03:28.305083       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:03:28.305678       1 server.go:846] "Version info" version="v1.28.2"
	I1002 22:03:28.306003       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:03:28.307068       1 config.go:188] "Starting service config controller"
	I1002 22:03:28.307210       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:03:28.307314       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:03:28.307356       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:03:28.308094       1 config.go:315] "Starting node config controller"
	I1002 22:03:28.310701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:03:28.408220       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 22:03:28.408327       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:03:28.411617       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec] <==
	* I1002 22:02:27.787978       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:03:20.698146       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 22:03:20.698179       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:03:20.713936       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 22:03:20.714046       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 22:03:20.714132       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:03:20.714169       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:03:20.714211       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:03:20.714240       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 22:03:20.714620       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 22:03:20.714695       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 22:03:20.814266       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:03:20.814269       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 22:03:20.814402       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] <==
	* E1002 22:00:09.571010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1002 22:00:09.574380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:09.574418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1002 22:00:13.962669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.963683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.963838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 22:00:13.963879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 22:00:13.963953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 22:00:13.963991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 22:00:13.964069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 22:00:13.964446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 22:00:13.964511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 22:00:13.964546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 22:00:13.964614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 22:00:13.964649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1002 22:00:15.366681       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1002 22:00:20.512563       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1002 22:00:20.513162       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1002 22:00:20.513708       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.383681    3953 manager.go:1106] Failed to create existing container: /crio-7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Error finding container 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Status 404 returned error can't find the container with id 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.383900    3953 manager.go:1106] Failed to create existing container: /crio-9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Error finding container 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Status 404 returned error can't find the container with id 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384123    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Error finding container 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Status 404 returned error can't find the container with id 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384351    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e: Error finding container 26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e: Status 404 returned error can't find the container with id 26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384577    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Error finding container daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Status 404 returned error can't find the container with id daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384846    3953 manager.go:1106] Failed to create existing container: /crio-daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Error finding container daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Status 404 returned error can't find the container with id daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.386724    3953 manager.go:1106] Failed to create existing container: /crio-f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8: Error finding container f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8: Status 404 returned error can't find the container with id f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.386885    3953 manager.go:1106] Failed to create existing container: /crio-97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c: Error finding container 97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c: Status 404 returned error can't find the container with id 97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.387048    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Error finding container 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Status 404 returned error can't find the container with id 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.387336    3953 manager.go:1106] Failed to create existing container: /crio-e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca: Error finding container e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca: Status 404 returned error can't find the container with id e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.468477    3953 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2q6\" (UniqueName: \"kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6\") pod \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\" (UID: \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\") "
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.468532    3953 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume\") pod \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\" (UID: \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\") "
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.469557    3953 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume" (OuterVolumeSpecName: "config-volume") pod "18849f27-d4fc-44c4-b9e0-ec7b818e9c76" (UID: "18849f27-d4fc-44c4-b9e0-ec7b818e9c76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.474550    3953 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6" (OuterVolumeSpecName: "kube-api-access-jw2q6") pod "18849f27-d4fc-44c4-b9e0-ec7b818e9c76" (UID: "18849f27-d4fc-44c4-b9e0-ec7b818e9c76"). InnerVolumeSpecName "kube-api-access-jw2q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.569350    3953 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jw2q6\" (UniqueName: \"kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6\") on node \"pause-050274\" DevicePath \"\""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.569394    3953 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume\") on node \"pause-050274\" DevicePath \"\""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.749523    3953 scope.go:117] "RemoveContainer" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.783190    3953 scope.go:117] "RemoveContainer" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.811423    3953 scope.go:117] "RemoveContainer" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: E1002 22:04:38.811924    3953 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812026    3953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"} err="failed to get container status \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812044    3953 scope.go:117] "RemoveContainer" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: E1002 22:04:38.812523    3953 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812558    3953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"} err="failed to get container status \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist"
	Oct 02 22:04:40 pause-050274 kubelet[3953]: I1002 22:04:40.122562    3953 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="18849f27-d4fc-44c4-b9e0-ec7b818e9c76" path="/var/lib/kubelet/pods/18849f27-d4fc-44c4-b9e0-ec7b818e9c76/volumes"
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 22:04:39.155406 1165301 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:04:39.151984    5058 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	time="2023-10-02T22:04:39Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist"
	 output: "\n** stderr ** \nE1002 22:04:39.151984    5058 remote_runtime.go:625] \"ContainerStatus from runtime service failed\" err=\"rpc error: code = NotFound desc = could not find container \\\"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\\\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist\" containerID=\"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\"\ntime=\"2023-10-02T22:04:39Z\" level=fatal msg=\"rpc error: code = NotFound desc = could not find container \\\"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\\\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist\"\n\n** /stderr **"
	E1002 22:04:39.322319 1165301 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:04:39.318867    5080 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	time="2023-10-02T22:04:39Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist"
	 output: "\n** stderr ** \nE1002 22:04:39.318867    5080 remote_runtime.go:625] \"ContainerStatus from runtime service failed\" err=\"rpc error: code = NotFound desc = could not find container \\\"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\\\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist\" containerID=\"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\"\ntime=\"2023-10-02T22:04:39Z\" level=fatal msg=\"rpc error: code = NotFound desc = could not find container \\\"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\\\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist\"\n\n** /stderr **"
	! unable to fetch logs for: coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60], coredns [fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d]

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-050274 -n pause-050274
helpers_test.go:261: (dbg) Run:  kubectl --context pause-050274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-050274
helpers_test.go:235: (dbg) docker inspect pause-050274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f",
	        "Created": "2023-10-02T21:58:04.69162597Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1148181,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-02T21:58:05.302377128Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/hosts",
	        "LogPath": "/var/lib/docker/containers/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f-json.log",
	        "Name": "/pause-050274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-050274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-050274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00-init/diff:/var/lib/docker/overlay2/211b77e87812a1edc3010e11f8a4e888a425a4aebe773b65e967cb7beecedbef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e4cb105ccdfc0149fc694febc3212b1079fbe49e1d7e08c4772891c650c6fb00/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-050274",
	                "Source": "/var/lib/docker/volumes/pause-050274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-050274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-050274",
	                "name.minikube.sigs.k8s.io": "pause-050274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "868a09bff40049a86186cd21e35329b3ebb6f9735b13af31d4678253a1fb079e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33876"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33877"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/868a09bff400",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-050274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "cbe09fdff1d2",
	                        "pause-050274"
	                    ],
	                    "NetworkID": "8c24a6fb62556caa93968b9db047d26d1e3c64ab7847dac2444544692be83d8b",
	                    "EndpointID": "a41c97e3d2840809f494ff8520c86fbbae91bcad02737a18c7e268362086c34b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-050274 -n pause-050274
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p pause-050274 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p pause-050274 logs -n 25: (1.91875839s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:54 UTC | 02 Oct 23 21:54 UTC |
	| start   | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:54 UTC | 02 Oct 23 21:55 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| image   | test-preload-673079 image list | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:55 UTC |
	| delete  | -p test-preload-673079         | test-preload-673079         | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:55 UTC |
	| start   | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:55 UTC | 02 Oct 23 21:56 UTC |
	|         | --memory=2048 --driver=docker  |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 5m                  |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC | 02 Oct 23 21:56 UTC |
	|         | --cancel-scheduled             |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC |                     |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:56 UTC | 02 Oct 23 21:57 UTC |
	|         | --schedule 15s                 |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-908756       | scheduled-stop-908756       | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:57 UTC |
	| start   | -p insufficient-storage-768004 | insufficient-storage-768004 | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC |                     |
	|         | --memory=2048 --output=json    |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-768004 | insufficient-storage-768004 | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:57 UTC |
	| start   | -p pause-050274 --memory=2048  | pause-050274                | jenkins | v1.31.2 | 02 Oct 23 21:57 UTC | 02 Oct 23 21:59 UTC |
	|         | --install-addons=false         |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker     |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p pause-050274                | pause-050274                | jenkins | v1.31.2 | 02 Oct 23 21:59 UTC | 02 Oct 23 22:04 UTC |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-123767      | missing-upgrade-123767      | jenkins | v1.31.2 | 02 Oct 23 21:59 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-123767      | missing-upgrade-123767      | jenkins | v1.31.2 | 02 Oct 23 22:00 UTC | 02 Oct 23 22:00 UTC |
	| start   | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:00 UTC | 02 Oct 23 22:01 UTC |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:01 UTC | 02 Oct 23 22:01 UTC |
	| start   | -p kubernetes-upgrade-573624   | kubernetes-upgrade-573624   | jenkins | v1.31.2 | 02 Oct 23 22:01 UTC |                     |
	|         | --memory=2200                  |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                             |         |         |                     |                     |
	|         | --alsologtostderr              |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker           |                             |         |         |                     |                     |
	|         | --container-runtime=crio       |                             |         |         |                     |                     |
	|---------|--------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 22:01:17
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
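	The header documented above is the standard klog/glog prefix used on every trace line that follows. As a minimal sketch (not part of the captured output), the Go snippet below splits one such line into its fields, assuming exactly the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header documented above: severity letter, month/day,
// wall-clock time with microseconds, thread id, source file:line, message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I1002 22:01:17.301072 1160029 out.go:296] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s-%s time=%s pid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}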
	I1002 22:01:17.301072 1160029 out.go:296] Setting OutFile to fd 1 ...
	I1002 22:01:17.301289 1160029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:17.301316 1160029 out.go:309] Setting ErrFile to fd 2...
	I1002 22:01:17.301335 1160029 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:01:17.301602 1160029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 22:01:17.301990 1160029 out.go:303] Setting JSON to false
	I1002 22:01:17.303107 1160029 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17025,"bootTime":1696267053,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 22:01:17.303258 1160029 start.go:138] virtualization:  
	I1002 22:01:17.307215 1160029 out.go:177] * [kubernetes-upgrade-573624] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 22:01:17.309400 1160029 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 22:01:17.311149 1160029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:01:17.309487 1160029 notify.go:220] Checking for updates...
	I1002 22:01:17.315627 1160029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:01:17.317584 1160029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 22:01:17.319583 1160029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:01:17.321852 1160029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:01:17.324222 1160029 config.go:182] Loaded profile config "kubernetes-upgrade-573624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1002 22:01:17.324749 1160029 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 22:01:17.352396 1160029 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 22:01:17.352496 1160029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:17.442916 1160029 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:01:17.433082821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:01:17.443022 1160029 docker.go:294] overlay module found
	I1002 22:01:17.445999 1160029 out.go:177] * Using the docker driver based on existing profile
	I1002 22:01:17.447935 1160029 start.go:298] selected driver: docker
	I1002 22:01:17.447950 1160029 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:17.448046 1160029 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:01:17.448682 1160029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:01:17.528211 1160029 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:01:17.518778279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:01:17.528548 1160029 cni.go:84] Creating CNI manager for ""
	I1002 22:01:17.528566 1160029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:17.528578 1160029 start_flags.go:321] config:
	{Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:17.530857 1160029 out.go:177] * Starting control plane node kubernetes-upgrade-573624 in cluster kubernetes-upgrade-573624
	I1002 22:01:17.532933 1160029 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 22:01:17.534787 1160029 out.go:177] * Pulling base image ...
	I1002 22:01:17.536754 1160029 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 22:01:17.536807 1160029 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 22:01:17.536821 1160029 cache.go:57] Caching tarball of preloaded images
	I1002 22:01:17.536914 1160029 preload.go:174] Found /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1002 22:01:17.536927 1160029 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1002 22:01:17.537032 1160029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/config.json ...
	I1002 22:01:17.537265 1160029 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 22:01:17.562766 1160029 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I1002 22:01:17.562795 1160029 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I1002 22:01:17.562815 1160029 cache.go:195] Successfully downloaded all kic artifacts
	I1002 22:01:17.562889 1160029 start.go:365] acquiring machines lock for kubernetes-upgrade-573624: {Name:mk1c322b4ea74092c8156e6c24f3801e5e50ca23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:01:17.562954 1160029 start.go:369] acquired machines lock for "kubernetes-upgrade-573624" in 41.96µs
	I1002 22:01:17.562973 1160029 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:01:17.562979 1160029 fix.go:54] fixHost starting: 
	I1002 22:01:17.563281 1160029 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-573624 --format={{.State.Status}}
	I1002 22:01:17.583303 1160029 fix.go:102] recreateIfNeeded on kubernetes-upgrade-573624: state=Stopped err=<nil>
	W1002 22:01:17.583345 1160029 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 22:01:17.585740 1160029 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-573624" ...
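	The restart that follows is two plain Docker CLI calls: docker start on the stopped container, then docker container inspect to confirm its state (both visible as cli_runner entries at 22:01:17.587 and 22:01:17.903 further down). A minimal Go sketch of the same pair of calls, with the container name taken from this log, not the minikube implementation itself:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// restartAndInspect mirrors the two docker CLI calls visible in the log:
// `docker start <name>` followed by
// `docker container inspect <name> --format={{.State.Status}}`.
func restartAndInspect(name string) (string, error) {
	if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
		return "", fmt.Errorf("docker start: %v: %s", err, out)
	}
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect: %v", err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Container name taken from this run; adjust for your own profile.
	state, err := restartAndInspect("kubernetes-upgrade-573624")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("container state:", state)
}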
	I1002 22:01:15.042054 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:15.042109 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:15.042126 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
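	Each block above is one poll of the apiserver's aggregated /healthz endpoint; a 500 response lists every sub-check, and here only [-]etcd is failing, so the wait loop keeps retrying every couple of seconds. A minimal sketch of the same probe in Go, using the endpoint reported in the log and skipping certificate verification purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint as reported in the log; InsecureSkipVerify is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 200 means every sub-check passed; 500 returns the [+]/[-] breakdown seen above.
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}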
	I1002 22:01:17.053232 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:17.053269 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:17.053284 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:19.064083 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:19.064114 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:19.064126 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:17.587882 1160029 cli_runner.go:164] Run: docker start kubernetes-upgrade-573624
	I1002 22:01:17.903942 1160029 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-573624 --format={{.State.Status}}
	I1002 22:01:17.923529 1160029 kic.go:426] container "kubernetes-upgrade-573624" state is running.
	I1002 22:01:17.923912 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:17.948857 1160029 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/config.json ...
	I1002 22:01:17.949084 1160029 machine.go:88] provisioning docker machine ...
	I1002 22:01:17.949104 1160029 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-573624"
	I1002 22:01:17.949153 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:17.971295 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:17.972097 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:17.972118 1160029 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-573624 && echo "kubernetes-upgrade-573624" | sudo tee /etc/hostname
	I1002 22:01:17.972831 1160029 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:01:21.141551 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-573624
	
	I1002 22:01:21.141640 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:21.168480 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:21.168893 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:21.168918 1160029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-573624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-573624/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-573624' | sudo tee -a /etc/hosts; 
				fi
			fi
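	The hostname and /etc/hosts commands above are pushed over the container's forwarded SSH port (127.0.0.1:33899, user docker, key path shown in the sshutil lines below). A minimal sketch of issuing a command over that channel with golang.org/x/crypto/ssh; the port and key path are the values from this particular run and change on every start:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported in this log; both are per-run values.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33899", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Check the result of the /etc/hosts edit performed above.
	out, err := session.CombinedOutput("grep 127.0.1.1 /etc/hosts")
	fmt.Printf("%s (err=%v)\n", out, err)
}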
	I1002 22:01:21.310974 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:01:21.311009 1160029 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:01:21.311058 1160029 ubuntu.go:177] setting up certificates
	I1002 22:01:21.311068 1160029 provision.go:83] configureAuth start
	I1002 22:01:21.311135 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:21.328855 1160029 provision.go:138] copyHostCerts
	I1002 22:01:21.328949 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:01:21.328977 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:01:21.329058 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:01:21.329167 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:01:21.329178 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:01:21.329385 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:01:21.329490 1160029 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:01:21.329502 1160029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:01:21.329537 1160029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:01:21.329592 1160029 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-573624 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-573624]
	I1002 22:01:21.822768 1160029 provision.go:172] copyRemoteCerts
	I1002 22:01:21.822837 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:01:21.822880 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:21.844800 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:21.943929 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:01:21.972605 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 22:01:22.003447 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 22:01:22.034448 1160029 provision.go:86] duration metric: configureAuth took 723.364769ms
	I1002 22:01:22.034473 1160029 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:01:22.034683 1160029 config.go:182] Loaded profile config "kubernetes-upgrade-573624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 22:01:22.034789 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.053528 1160029 main.go:141] libmachine: Using SSH client type: native
	I1002 22:01:22.053977 1160029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33899 <nil> <nil>}
	I1002 22:01:22.054002 1160029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:01:22.397837 1160029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:01:22.397901 1160029 machine.go:91] provisioned docker machine in 4.448806793s
	I1002 22:01:22.397939 1160029 start.go:300] post-start starting for "kubernetes-upgrade-573624" (driver="docker")
	I1002 22:01:22.397980 1160029 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:01:22.398112 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:01:22.398193 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.418981 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.520542 1160029 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:01:22.524898 1160029 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:01:22.524948 1160029 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:01:22.524960 1160029 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:01:22.524972 1160029 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1002 22:01:22.524987 1160029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:01:22.525049 1160029 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:01:22.525130 1160029 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:01:22.525290 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:01:22.536275 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:01:22.564980 1160029 start.go:303] post-start completed in 166.99618ms
	I1002 22:01:22.565065 1160029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:01:22.565108 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.583452 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.679587 1160029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:01:22.685419 1160029 fix.go:56] fixHost completed within 5.122432372s
	I1002 22:01:22.685441 1160029 start.go:83] releasing machines lock for "kubernetes-upgrade-573624", held for 5.12247919s
	I1002 22:01:22.685512 1160029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-573624
	I1002 22:01:22.702921 1160029 ssh_runner.go:195] Run: cat /version.json
	I1002 22:01:22.702980 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.703214 1160029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:01:22.703266 1160029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-573624
	I1002 22:01:22.734432 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.739577 1160029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/kubernetes-upgrade-573624/id_rsa Username:docker}
	I1002 22:01:22.829880 1160029 ssh_runner.go:195] Run: systemctl --version
	I1002 22:01:22.965510 1160029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:01:23.114996 1160029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 22:01:23.120748 1160029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:01:23.131750 1160029 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 22:01:23.131853 1160029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:01:23.142398 1160029 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 22:01:23.142467 1160029 start.go:469] detecting cgroup driver to use...
	I1002 22:01:23.142521 1160029 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 22:01:23.142596 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:01:23.156397 1160029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:01:23.170196 1160029 docker.go:197] disabling cri-docker service (if available) ...
	I1002 22:01:23.170262 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:01:23.184674 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:01:23.198262 1160029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 22:01:23.291277 1160029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:01:23.387373 1160029 docker.go:213] disabling docker service ...
	I1002 22:01:23.387489 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:01:23.403114 1160029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:01:23.416826 1160029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:01:23.508133 1160029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:01:23.618156 1160029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:01:23.633648 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:01:23.657857 1160029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1002 22:01:23.657925 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.671403 1160029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1002 22:01:23.671489 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.684594 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.696616 1160029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:01:23.709668 1160029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 22:01:23.720848 1160029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 22:01:23.731118 1160029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 22:01:23.741406 1160029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 22:01:23.832504 1160029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 22:01:23.951669 1160029 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 22:01:23.951788 1160029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 22:01:23.957007 1160029 start.go:537] Will wait 60s for crictl version
	I1002 22:01:23.957068 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:01:23.961747 1160029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 22:01:24.007762 1160029 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1002 22:01:24.007858 1160029 ssh_runner.go:195] Run: crio --version
	I1002 22:01:24.054760 1160029 ssh_runner.go:195] Run: crio --version
	I1002 22:01:24.105154 1160029 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
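	The runtime fields reported at 22:01:24.007 (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion) come from crictl version against the CRI socket. A minimal sketch that shells out to the same command and picks those fields apart, assuming crictl is on PATH and sudo is available:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the log shows: `sudo /usr/bin/crictl version`.
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version failed: %v\n%s", err, out)
	}
	// Output is "Key:  value" per line, e.g. "RuntimeVersion:  1.24.6".
	fields := map[string]string{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Printf("runtime %s %s (CRI API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}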
	I1002 22:01:21.073618 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:21.073652 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:21.073671 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:23.083475 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:23.083507 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:23.083519 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:24.107584 1160029 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-573624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 22:01:24.127189 1160029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1002 22:01:24.132561 1160029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 22:01:24.146818 1160029 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 22:01:24.146887 1160029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:01:24.193899 1160029 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1002 22:01:24.193983 1160029 ssh_runner.go:195] Run: which lz4
	I1002 22:01:24.198589 1160029 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1002 22:01:24.202945 1160029 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1002 22:01:24.202978 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (389006849 bytes)
	I1002 22:01:26.298276 1160029 crio.go:444] Took 2.099727 seconds to copy over tarball
	I1002 22:01:26.298347 1160029 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
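	The preload path above is copy-then-extract: the lz4 tarball is scp'd to /preloaded.tar.lz4 and unpacked into /var with tar -I lz4. A minimal sketch of the extraction step, using the exact command from the log (in practice this runs inside the node container, not on the host, and needs the lz4 binary installed):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the command in the log: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}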
	I1002 22:01:25.093166 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:25.093198 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1002 22:01:25.093231 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:01:27.103646 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:01:27.103675 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
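The 500 responses above are the apiserver's aggregated /healthz output: every sub-check reports ok except etcd, so the overall check fails until etcd is reachable again. A minimal sketch of the kind of probe minikube's api_server.go performs (helper names and the skip-verify TLS shortcut are assumptions of this sketch, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz is a hypothetical helper: it GETs the apiserver's /healthz
// endpoint and returns the status code plus the verbose check breakdown
// that the apiserver emits when a sub-check (here: etcd) fails.
func probeHealthz(base string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second, // same order as the poll timeouts in the log
		Transport: &http.Transport{
			// Illustrative shortcut only: minikube authenticates with the
			// cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), err
}

func main() {
	code, body, err := probeHealthz("https://192.168.67.2:8443")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	fmt.Printf("healthz returned %d:\n%s\n", code, body)
	// Individual checks can also be queried directly, e.g. /healthz/etcd,
	// to confirm which component is holding the apiserver at 500.
}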
	I1002 22:01:27.103705 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:01:27.103768 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:01:27.178139 1152871 cri.go:89] found id: "a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:01:27.178158 1152871 cri.go:89] found id: "930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:01:27.178164 1152871 cri.go:89] found id: ""
	I1002 22:01:27.178172 1152871 logs.go:284] 2 containers: [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca]
	I1002 22:01:27.178226 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.184859 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.190133 1152871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:01:27.190238 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:01:27.252654 1152871 cri.go:89] found id: "4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:01:27.252674 1152871 cri.go:89] found id: ""
	I1002 22:01:27.252682 1152871 logs.go:284] 1 containers: [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959]
	I1002 22:01:27.252737 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.258975 1152871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:01:27.259046 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:01:27.328041 1152871 cri.go:89] found id: "1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:01:27.328060 1152871 cri.go:89] found id: "1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:01:27.328066 1152871 cri.go:89] found id: ""
	I1002 22:01:27.328073 1152871 logs.go:284] 2 containers: [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794]
	I1002 22:01:27.328132 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.334697 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.341178 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:01:27.341417 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:01:27.414498 1152871 cri.go:89] found id: "ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:01:27.414573 1152871 cri.go:89] found id: ""
	I1002 22:01:27.414596 1152871 logs.go:284] 1 containers: [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601]
	I1002 22:01:27.414689 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.420756 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:01:27.420872 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:01:27.482917 1152871 cri.go:89] found id: "47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:01:27.482983 1152871 cri.go:89] found id: ""
	I1002 22:01:27.483005 1152871 logs.go:284] 1 containers: [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66]
	I1002 22:01:27.483094 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.491225 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:01:27.491339 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:01:27.586165 1152871 cri.go:89] found id: "b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:01:27.586249 1152871 cri.go:89] found id: ""
	I1002 22:01:27.586272 1152871 logs.go:284] 1 containers: [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968]
	I1002 22:01:27.586359 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.591388 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:01:27.591506 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:01:27.653897 1152871 cri.go:89] found id: "75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:01:27.653973 1152871 cri.go:89] found id: ""
	I1002 22:01:27.654004 1152871 logs.go:284] 1 containers: [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f]
	I1002 22:01:27.654086 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:01:27.659314 1152871 logs.go:123] Gathering logs for etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] ...
	I1002 22:01:27.659384 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:01:27.755477 1152871 logs.go:123] Gathering logs for kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] ...
	I1002 22:01:27.755550 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:01:27.826165 1152871 logs.go:123] Gathering logs for container status ...
	I1002 22:01:27.826190 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:01:27.913566 1152871 logs.go:123] Gathering logs for kubelet ...
	I1002 22:01:27.913643 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:01:28.055168 1152871 logs.go:123] Gathering logs for coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] ...
	I1002 22:01:28.055246 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:01:28.145661 1152871 logs.go:123] Gathering logs for kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] ...
	I1002 22:01:28.145738 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:01:28.243382 1152871 logs.go:123] Gathering logs for kube-controller-manager [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968] ...
	I1002 22:01:28.243455 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:01:28.300322 1152871 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:01:28.300349 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:01:28.396743 1152871 logs.go:123] Gathering logs for coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60] ...
	I1002 22:01:28.396780 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:01:28.454715 1152871 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:01:28.454746 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:01:28.864597 1160029 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.566218076s)
	I1002 22:01:28.864623 1160029 crio.go:451] Took 2.566322 seconds to extract the tarball
	I1002 22:01:28.864633 1160029 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1002 22:01:28.913893 1160029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 22:01:28.967299 1160029 crio.go:496] all images are preloaded for cri-o runtime.
	I1002 22:01:28.967322 1160029 cache_images.go:84] Images are preloaded, skipping loading
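In the 1160029 lines above, minikube copies the ~389 MB preload tarball onto the node, unpacks it into /var with tar's lz4 filter, removes the tarball, and then asks crictl for the image list; once the expected images are present, image loading is skipped. A rough Go sketch of that copy/extract/verify sequence, run locally rather than over SSH (the run helper is made up; the command strings mirror the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run mirrors ssh_runner's "Run:" lines, but executes the command locally
// and reports how long it took.
func run(name string, args ...string) error {
	start := time.Now()
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("ran %s %v in %s\n%s", name, args, time.Since(start), out)
	return err
}

func main() {
	// Unpack the preloaded images into /var, exactly as the log does.
	if err := run("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Remove the tarball and ask CRI-O (via crictl) what images it now has;
	// minikube compares this list against the expected preload contents.
	_ = run("sudo", "rm", "/preloaded.tar.lz4")
	_ = run("sudo", "crictl", "images", "--output", "json")
}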
	I1002 22:01:28.967409 1160029 ssh_runner.go:195] Run: crio config
	I1002 22:01:29.031840 1160029 cni.go:84] Creating CNI manager for ""
	I1002 22:01:29.031867 1160029 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:01:29.031890 1160029 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 22:01:29.031912 1160029 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-573624 NodeName:kubernetes-upgrade-573624 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 22:01:29.032053 1160029 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-573624"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
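The generated kubeadm.yaml above is a single multi-document YAML holding four objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick sanity check that all expected documents are present before handing the file to kubeadm is to split on the document separator and read the kind/apiVersion lines; a small stdlib-only sketch (the file path matches the log, the check itself is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Each kubeadm document is separated by a line containing only "---".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind, apiVersion := "", ""
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
		}
		fmt.Printf("document %d: %s (%s)\n", i, kind, apiVersion)
	}
}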
	I1002 22:01:29.032128 1160029 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-573624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 22:01:29.032195 1160029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 22:01:29.043315 1160029 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 22:01:29.043391 1160029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 22:01:29.054082 1160029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1002 22:01:29.075530 1160029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 22:01:29.097542 1160029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1002 22:01:29.119835 1160029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1002 22:01:29.124538 1160029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
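The bash one-liner above makes the hosts update idempotent: any existing control-plane.minikube.internal line is filtered out, the fresh mapping is appended, and the file is rewritten from a temp copy. The same idea in Go, as a sketch only (the helper name is made up; the values mirror the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps
// hostname to ip, dropping any stale mapping first.
func ensureHostsEntry(hostsPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous entry for this hostname (mirrors `grep -v $'\t...'`).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	// Write a temp file and rename it into place, like the `> /tmp/h.$$;
	// sudo cp` dance in the log, so the update is effectively atomic.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"))
}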
	I1002 22:01:29.138300 1160029 certs.go:56] Setting up /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624 for IP: 192.168.76.2
	I1002 22:01:29.138332 1160029 certs.go:190] acquiring lock for shared ca certs: {Name:mk89a4b04b53a0a6e55cb9c88355018fadb8a1cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:01:29.138469 1160029 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key
	I1002 22:01:29.138517 1160029 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key
	I1002 22:01:29.138594 1160029 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.key
	I1002 22:01:29.138667 1160029 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.key.31bdca25
	I1002 22:01:29.138712 1160029 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.key
	I1002 22:01:29.138826 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem (1338 bytes)
	W1002 22:01:29.138867 1160029 certs.go:433] ignoring /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732_empty.pem, impossibly tiny 0 bytes
	I1002 22:01:29.138880 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 22:01:29.138906 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem (1082 bytes)
	I1002 22:01:29.138936 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem (1123 bytes)
	I1002 22:01:29.138965 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem (1679 bytes)
	I1002 22:01:29.139015 1160029 certs.go:437] found cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:01:29.139731 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 22:01:29.170717 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 22:01:29.199043 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 22:01:29.228643 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 22:01:29.258148 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 22:01:29.287230 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 22:01:29.315317 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 22:01:29.344146 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1002 22:01:29.373683 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /usr/share/ca-certificates/10477322.pem (1708 bytes)
	I1002 22:01:29.403887 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 22:01:29.433036 1160029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/1047732.pem --> /usr/share/ca-certificates/1047732.pem (1338 bytes)
	I1002 22:01:29.461573 1160029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 22:01:29.482960 1160029 ssh_runner.go:195] Run: openssl version
	I1002 22:01:29.490014 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10477322.pem && ln -fs /usr/share/ca-certificates/10477322.pem /etc/ssl/certs/10477322.pem"
	I1002 22:01:29.502638 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.507351 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 21:30 /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.507419 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10477322.pem
	I1002 22:01:29.516349 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10477322.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 22:01:29.527585 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 22:01:29.539615 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.544423 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 21:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.544498 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 22:01:29.553304 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 22:01:29.564623 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1047732.pem && ln -fs /usr/share/ca-certificates/1047732.pem /etc/ssl/certs/1047732.pem"
	I1002 22:01:29.576409 1160029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.581080 1160029 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 21:30 /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.581140 1160029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1047732.pem
	I1002 22:01:29.589759 1160029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1047732.pem /etc/ssl/certs/51391683.0"
	I1002 22:01:29.600441 1160029 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 22:01:29.604807 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 22:01:29.613117 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 22:01:29.621990 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 22:01:29.630417 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 22:01:29.639154 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 22:01:29.647779 1160029 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
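Two things happen in the block above: each CA certificate copied under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash (the b5213941.0-style names), and each serving/client certificate is checked with `openssl x509 -checkend 86400`, i.e. "will this cert still be valid 24 hours from now". A sketch of both checks driven from Go via the openssl CLI (paths are examples taken from the log; helper names are made up):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHash asks openssl for the short subject hash that /etc/ssl/certs
// symlinks are named after (e.g. "b5213941").
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

// validFor24h mirrors `openssl x509 -checkend 86400`: a zero exit status
// means the certificate does not expire within the next 86400 seconds.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	ca := "/usr/share/ca-certificates/minikubeCA.pem"
	hash, err := subjectHash(ca)
	if err != nil {
		fmt.Println(err)
		return
	}
	link := "/etc/ssl/certs/" + hash + ".0"
	fmt.Printf("would link %s -> %s\n", link, ca)
	_ = os.Symlink(ca, link) // ignore "already exists" errors in this sketch

	fmt.Println("apiserver-kubelet-client cert valid for 24h:",
		validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}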
	I1002 22:01:29.656437 1160029 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-573624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-573624 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 22:01:29.656531 1160029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 22:01:29.656592 1160029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:01:29.699807 1160029 cri.go:89] found id: ""
	I1002 22:01:29.699951 1160029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 22:01:29.711566 1160029 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 22:01:29.711637 1160029 kubeadm.go:636] restartCluster start
	I1002 22:01:29.711748 1160029 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 22:01:29.722844 1160029 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 22:01:29.723646 1160029 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-573624" does not appear in /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:01:29.723991 1160029 kubeconfig.go:146] "kubernetes-upgrade-573624" context is missing from /home/jenkins/minikube-integration/17323-1042317/kubeconfig - will repair!
	I1002 22:01:29.724599 1160029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:01:29.725731 1160029 kapi.go:59] client config for kubernetes-upgrade-573624: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/kubernetes-upgrade-573624/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
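Before reconfiguring, minikube notices that the kubernetes-upgrade-573624 context is missing from the Jenkins kubeconfig, repairs it, and then builds a rest.Config pointing at the profile's client certificate pair. Listing the contexts a kubeconfig actually contains is a short exercise with client-go (the k8s.io/client-go dependency is an assumption of this sketch; the path is taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/home/jenkins/minikube-integration/17323-1042317/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		// A missing entry here is what triggers the "will repair!" path above.
		fmt.Printf("context %q -> cluster %q, user %q\n", name, ctx.Cluster, ctx.AuthInfo)
	}
}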
	I1002 22:01:29.726791 1160029 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 22:01:29.737421 1160029 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-10-02 22:00:38.206983393 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-10-02 22:01:29.114743692 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-573624"
	   kubeletExtraArgs:
	     node-ip: 192.168.76.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-573624
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.28.2
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
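The restart path decides between "reuse the running control plane" and "reconfigure" simply by diffing the kubeadm.yaml already on the node against the freshly generated one; here the apiVersion bump (v1beta1 to v1beta3), the unix:// CRI socket, the cluster name, and the Kubernetes version (v1.16.0 to v1.28.2) all differ, so a reconfigure is required. The decision reduces to diff's exit status, e.g. (paths mirror the log; the helper name is made up):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfigure returns true when the two kubeadm configs differ.
// `diff -u` exits 0 when the files match, 1 when they differ, >1 on error.
func needsReconfigure(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: the cluster can be reused as-is
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil // differ: reconfigure, keep the diff for the log
	}
	return false, "", err
}

func main() {
	differ, diff, err := needsReconfigure(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	if differ {
		fmt.Println("needs reconfigure: configs differ:\n" + diff)
	}
}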
	I1002 22:01:29.737453 1160029 kubeadm.go:1128] stopping kube-system containers ...
	I1002 22:01:29.737465 1160029 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 22:01:29.737522 1160029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 22:01:29.780270 1160029 cri.go:89] found id: ""
	I1002 22:01:29.780340 1160029 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 22:01:29.794729 1160029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 22:01:29.806216 1160029 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5707 Oct  2 22:00 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5739 Oct  2 22:00 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Oct  2 22:00 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5691 Oct  2 22:00 /etc/kubernetes/scheduler.conf
	
	I1002 22:01:29.806331 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 22:01:29.817581 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 22:01:29.828496 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 22:01:29.839465 1160029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 22:01:29.850242 1160029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 22:01:29.861166 1160029 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 22:01:29.861193 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:29.921282 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.272796 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.351475313s)
	I1002 22:01:31.272831 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.439481 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:01:31.529763 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
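Rather than a full `kubeadm init`, the restart replays individual init phases against the updated config: certs, kubeconfig files, kubelet start, the static control-plane manifests, and local etcd. A compact sketch of that sequence (the binary path and phase list mirror the log lines; error handling is simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.2/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"

	// The same phase order as the log: each phase is re-run against the
	// existing node during the version upgrade.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", config)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", phase, err)
			return
		}
	}
	fmt.Println("control-plane manifests regenerated; waiting for the apiserver to come back")
}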
	I1002 22:01:31.621746 1160029 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:01:31.621854 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:31.639944 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.161567 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.661553 1160029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:01:32.699126 1160029 api_server.go:72] duration metric: took 1.077400004s to wait for apiserver process to appear ...
	I1002 22:01:32.699152 1160029 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:01:32.699170 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:37.699999 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:37.700058 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:42.701286 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:43.202065 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:48.202878 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:48.202927 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.203165 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:01:53.203209 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.744340 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33610->192.168.76.2:8443: read: connection reset by peer
	I1002 22:01:53.744380 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:53.744647 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:54.202297 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:54.202700 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:54.702337 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:01:54.702783 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:01:55.202333 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:00.202766 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:00.202823 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:05.203234 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:05.203288 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:10.203519 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:10.203559 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:15.204296 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:15.204340 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.097474 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:45856->192.168.76.2:8443: read: connection reset by peer
	I1002 22:02:16.097512 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.097798 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:16.202094 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.202604 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:16.702366 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:16.702884 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:17.201391 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:17.201790 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:17.701448 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:17.701848 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:18.202375 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:18.202795 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:18.702343 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:18.702764 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:19.201406 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:19.201818 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:19.701433 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:19.701825 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:20.201662 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:20.202091 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:20.701457 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:20.701869 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:21.201411 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:21.201844 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:21.701404 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:21.701886 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:22.202401 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:22.202791 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:22.702386 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:22.702812 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:23.201449 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:23.201880 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:23.702261 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:23.702659 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:24.202254 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:24.202702 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:24.702361 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:24.702885 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:25.202410 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:25.202880 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:25.702414 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:25.702862 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:26.202362 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:26.202857 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:26.701436 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:26.701778 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:27.202320 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:27.202667 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:28.609083 1152871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (1m0.154303837s)
	W1002 22:02:28.609120 1152871 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: 
	** stderr ** 
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	
	** /stderr **
	I1002 22:02:28.609129 1152871 logs.go:123] Gathering logs for kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] ...
	I1002 22:02:28.609139 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:28.668679 1152871 logs.go:123] Gathering logs for kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] ...
	I1002 22:02:28.668716 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:02:28.711450 1152871 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:28.711476 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:28.733260 1152871 logs.go:123] Gathering logs for kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] ...
	I1002 22:02:28.733291 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:27.701429 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:27.701895 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:28.201433 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:28.201843 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:28.702378 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:28.702780 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:29.201401 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:29.201867 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:29.701417 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:29.701834 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:30.201623 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:30.202050 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:30.701473 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:30.701927 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:31.201563 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:31.201980 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:31.701445 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:31.701811 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:32.202362 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:32.202713 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
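While the regenerated static pod manifests are being picked up, every probe in the 1160029 lines above either hits the ~5s client timeout or is refused outright, and the wait loop retries roughly every 500ms until an overall deadline. The cadence visible in the timestamps can be reproduced with a small poll loop (the deadline, interval, and skip-verify TLS config are illustrative choices of this sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s "context deadline exceeded" gaps
		Transport: &http.Transport{
			// Sketch-only shortcut; minikube trusts the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// Covers both "connection refused" and client timeouts seen above.
			fmt.Println("stopped:", err)
		} else {
			if resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s retry cadence
	}
	fmt.Println("gave up waiting for the apiserver")
}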
	I1002 22:02:31.289328 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:02:31.298641 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1002 22:02:31.298672 1152871 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
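The 500 above is the kube-apiserver's /healthz endpoint reporting itself unhealthy: every check passes except "[-]etcd failed", so the apiserver stays up but refuses to report ready while etcd is unavailable. A minimal sketch of the same kind of probe (an illustration, not minikube's actual api_server.go implementation; the address is simply reused from the log line above, and certificate verification is relaxed only to keep the example self-contained):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs a GET against the apiserver's /healthz endpoint and
	// returns an error carrying the body when the status is not 200.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // e.g. "connection refused" while the apiserver is restarting
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			// A 500 body enumerates each check; "[-]etcd failed" marks the failing one.
			return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.67.2:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
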
	I1002 22:02:31.298699 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:31.298765 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:31.344117 1152871 cri.go:89] found id: "a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:31.344143 1152871 cri.go:89] found id: "930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:31.344149 1152871 cri.go:89] found id: ""
	I1002 22:02:31.344157 1152871 logs.go:284] 2 containers: [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca]
	I1002 22:02:31.344221 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.349176 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.354197 1152871 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:31.354347 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:31.397684 1152871 cri.go:89] found id: "07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb"
	I1002 22:02:31.397752 1152871 cri.go:89] found id: "4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:02:31.397772 1152871 cri.go:89] found id: ""
	I1002 22:02:31.397796 1152871 logs.go:284] 2 containers: [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959]
	I1002 22:02:31.397875 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.402573 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.407234 1152871 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:31.407332 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:31.455394 1152871 cri.go:89] found id: "1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:02:31.455413 1152871 cri.go:89] found id: "1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:02:31.455420 1152871 cri.go:89] found id: ""
	I1002 22:02:31.455427 1152871 logs.go:284] 2 containers: [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794]
	I1002 22:02:31.455486 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.460261 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.464761 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:31.464852 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:31.510583 1152871 cri.go:89] found id: "ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec"
	I1002 22:02:31.510607 1152871 cri.go:89] found id: "ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:02:31.510613 1152871 cri.go:89] found id: ""
	I1002 22:02:31.510620 1152871 logs.go:284] 2 containers: [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601]
	I1002 22:02:31.510680 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.515463 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.520123 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:31.520193 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:31.569943 1152871 cri.go:89] found id: "47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:02:31.569966 1152871 cri.go:89] found id: ""
	I1002 22:02:31.569975 1152871 logs.go:284] 1 containers: [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66]
	I1002 22:02:31.570035 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.574966 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:31.575039 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:31.625076 1152871 cri.go:89] found id: "8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec"
	I1002 22:02:31.625101 1152871 cri.go:89] found id: "b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:02:31.625107 1152871 cri.go:89] found id: ""
	I1002 22:02:31.625115 1152871 logs.go:284] 2 containers: [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968]
	I1002 22:02:31.625176 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.629870 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.634481 1152871 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:31.634552 1152871 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:31.697232 1152871 cri.go:89] found id: "75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:02:31.697253 1152871 cri.go:89] found id: ""
	I1002 22:02:31.697262 1152871 logs.go:284] 1 containers: [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f]
	I1002 22:02:31.697318 1152871 ssh_runner.go:195] Run: which crictl
	I1002 22:02:31.702242 1152871 logs.go:123] Gathering logs for kube-scheduler [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec] ...
	I1002 22:02:31.702291 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec"
	I1002 22:02:31.747183 1152871 logs.go:123] Gathering logs for kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] ...
	I1002 22:02:31.747208 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601"
	I1002 22:02:31.822455 1152871 logs.go:123] Gathering logs for kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] ...
	I1002 22:02:31.822527 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f"
	I1002 22:02:31.865981 1152871 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:31.866008 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:31.949791 1152871 logs.go:123] Gathering logs for kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] ...
	I1002 22:02:31.949827 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c"
	I1002 22:02:32.030765 1152871 logs.go:123] Gathering logs for kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] ...
	I1002 22:02:32.030800 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca"
	I1002 22:02:32.079615 1152871 logs.go:123] Gathering logs for etcd [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb] ...
	I1002 22:02:32.079645 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb"
	I1002 22:02:32.137084 1152871 logs.go:123] Gathering logs for kube-controller-manager [b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968] ...
	I1002 22:02:32.137114 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b405b5463e77b97f62e9757632aef73eeda5bc4a01f68ea8a63479b2c4a31968"
	I1002 22:02:32.180900 1152871 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:32.180928 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:32.202963 1152871 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:32.202992 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
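The repeated "listing CRI containers" / "Gathering logs" pairs above all follow one pattern: run crictl ps -a --quiet --name=<component> to collect container IDs for a component, then crictl logs --tail 400 <id> for each ID found. A rough, self-contained sketch of that loop (an approximation meant to run directly on the node; minikube itself drives the same commands remotely through ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the container IDs (one per line in --quiet output)
	// whose name matches the given component, including exited containers (-a).
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs fetches the last 400 log lines of one container.
	func tailLogs(id string) (string, error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(out), err
	}

	func main() {
		ids, err := containerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("listing failed:", err)
			return
		}
		for _, id := range ids {
			logs, err := tailLogs(id)
			if err != nil {
				fmt.Printf("logs for %s failed: %v\n", id, err)
				continue
			}
			fmt.Printf("=== %s ===\n%s", id, logs)
		}
	}
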
	I1002 22:02:32.702273 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:32.702366 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:32.747144 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:32.747165 1160029 cri.go:89] found id: ""
	I1002 22:02:32.747173 1160029 logs.go:284] 1 containers: [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:32.747226 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.752076 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:32.752150 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:32.799411 1160029 cri.go:89] found id: ""
	I1002 22:02:32.799437 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.799447 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:32.799453 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:32.799512 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:32.841151 1160029 cri.go:89] found id: ""
	I1002 22:02:32.841179 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.841188 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:32.841194 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:32.841275 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:32.883371 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:32.883390 1160029 cri.go:89] found id: ""
	I1002 22:02:32.883399 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:32.883455 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.888018 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:32.888089 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:32.929755 1160029 cri.go:89] found id: ""
	I1002 22:02:32.929778 1160029 logs.go:284] 0 containers: []
	W1002 22:02:32.929786 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:32.929792 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:32.929854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:32.972499 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:32.972519 1160029 cri.go:89] found id: ""
	I1002 22:02:32.972527 1160029 logs.go:284] 1 containers: [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:32.972581 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:32.977180 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:32.977286 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:33.026796 1160029 cri.go:89] found id: ""
	I1002 22:02:33.026818 1160029 logs.go:284] 0 containers: []
	W1002 22:02:33.026828 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:33.026835 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:33.026902 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:33.070729 1160029 cri.go:89] found id: ""
	I1002 22:02:33.070753 1160029 logs.go:284] 0 containers: []
	W1002 22:02:33.070761 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:33.070772 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:33.070784 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:33.092496 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:33.092525 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:33.170625 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:33.170647 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:33.170659 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:33.223587 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:33.223620 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:33.311340 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:33.311374 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:33.359196 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:33.359224 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:33.394548 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:33.394578 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:33.439206 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:33.439235 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:36.014453 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:41.015003 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:02:41.015056 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:41.015117 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:41.058793 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:41.058813 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:41.058819 1160029 cri.go:89] found id: ""
	I1002 22:02:41.058826 1160029 logs.go:284] 2 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:41.058882 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.063438 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.068095 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:41.068166 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:41.108922 1160029 cri.go:89] found id: ""
	I1002 22:02:41.108946 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.108955 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:41.108962 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:41.109024 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:41.151402 1160029 cri.go:89] found id: ""
	I1002 22:02:41.151504 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.151518 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:41.151525 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:41.151613 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:41.195718 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:41.195740 1160029 cri.go:89] found id: ""
	I1002 22:02:41.195748 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:41.195805 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.200327 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:41.200396 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:41.242717 1160029 cri.go:89] found id: ""
	I1002 22:02:41.242739 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.242747 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:41.242755 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:41.242816 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:41.285706 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:41.285728 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:41.285733 1160029 cri.go:89] found id: ""
	I1002 22:02:41.285741 1160029 logs.go:284] 2 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:41.285800 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.290309 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:41.294926 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:41.295001 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:41.336676 1160029 cri.go:89] found id: ""
	I1002 22:02:41.336699 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.336707 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:41.336714 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:41.336771 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:41.387267 1160029 cri.go:89] found id: ""
	I1002 22:02:41.387331 1160029 logs.go:284] 0 containers: []
	W1002 22:02:41.387353 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:41.387385 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:41.387422 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:41.454773 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:41.454810 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:41.475954 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:41.475983 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:02:51.555076 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.079069792s)
	W1002 22:02:51.555116 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
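Three distinct apiserver failure modes appear in this log: "connection refused" (nothing is accepting connections on 8443), "TLS handshake timeout" (the port answers but the server stalls during TLS setup), and "context deadline exceeded" / "Client.Timeout exceeded" (no response within the client timeout). A purely illustrative classifier using simple substring matching (an assumption for readability, not how minikube itself distinguishes these cases):

	package main

	import (
		"fmt"
		"strings"
	)

	// classify maps an error message from the log to a coarse failure mode.
	func classify(errMsg string) string {
		switch {
		case strings.Contains(errMsg, "connection refused"):
			return "apiserver not listening (process down or restarting)"
		case strings.Contains(errMsg, "TLS handshake timeout"):
			return "port open but apiserver unresponsive during TLS setup"
		case strings.Contains(errMsg, "context deadline exceeded"),
			strings.Contains(errMsg, "Client.Timeout exceeded"):
			return "no response within the client timeout"
		default:
			return "unclassified"
		}
	}

	func main() {
		for _, m := range []string{
			"dial tcp 192.168.76.2:8443: connect: connection refused",
			"Unable to connect to the server: net/http: TLS handshake timeout",
			"Client.Timeout exceeded while awaiting headers",
		} {
			fmt.Printf("%s => %s\n", m, classify(m))
		}
	}
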
	I1002 22:02:51.555125 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:51.555135 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:51.601472 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:51.601501 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:51.668460 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:51.668491 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:51.710692 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:51.710727 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:51.788864 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:51.788902 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:51.835331 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:51.835357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:51.878019 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:51.878054 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:54.436276 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:55.510942 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:50108->192.168.76.2:8443: read: connection reset by peer
	I1002 22:02:55.511001 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:55.511066 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:55.575926 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:55.575950 1160029 cri.go:89] found id: "e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	I1002 22:02:55.575956 1160029 cri.go:89] found id: ""
	I1002 22:02:55.575965 1160029 logs.go:284] 2 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]
	I1002 22:02:55.576023 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.580614 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.584964 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:55.585040 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:55.625866 1160029 cri.go:89] found id: ""
	I1002 22:02:55.625888 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.625897 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:55.625903 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:55.625960 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:55.667979 1160029 cri.go:89] found id: ""
	I1002 22:02:55.668005 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.668014 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:55.668021 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:55.668087 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:55.713876 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:55.713896 1160029 cri.go:89] found id: ""
	I1002 22:02:55.713904 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:55.713959 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.718633 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:55.718706 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:55.760406 1160029 cri.go:89] found id: ""
	I1002 22:02:55.760432 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.760440 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:55.760447 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:55.760504 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:55.805402 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:55.805422 1160029 cri.go:89] found id: "a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:55.805428 1160029 cri.go:89] found id: ""
	I1002 22:02:55.805436 1160029 logs.go:284] 2 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a]
	I1002 22:02:55.805493 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.809917 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:55.814240 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:55.814311 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:55.879039 1160029 cri.go:89] found id: ""
	I1002 22:02:55.879062 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.879070 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:55.879077 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:55.879133 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:55.925703 1160029 cri.go:89] found id: ""
	I1002 22:02:55.925725 1160029 logs.go:284] 0 containers: []
	W1002 22:02:55.925733 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:55.925746 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:55.925758 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:56.007018 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:56.007045 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:56.007059 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:56.065577 1160029 logs.go:123] Gathering logs for kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8] ...
	I1002 22:02:56.065609 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	W1002 22:02:56.108911 1160029 logs.go:130] failed kube-apiserver [e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8": Process exited with status 1
	stdout:
	
	stderr:
	E1002 22:02:56.105489    1494 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist" containerID="e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	time="2023-10-02T22:02:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1002 22:02:56.105489    1494 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist" containerID="e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8"
	time="2023-10-02T22:02:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8\": container with ID starting with e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8 not found: ID does not exist"
	
	** /stderr **
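The NotFound failure above is a race: container e2e38d51... was returned by an earlier crictl listing but had already been removed by the time its logs were requested, so crictl logs exits non-zero. A small sketch (an assumption for illustration, not minikube's logs.go) that treats this case as "skip" rather than a hard error:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tailLogsTolerant fetches the last 400 log lines for a container ID, but
	// reports ok=false instead of an error when the container no longer exists.
	func tailLogsTolerant(id string) (logs string, ok bool, err error) {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "NotFound") {
				return "", false, nil // container vanished between listing and log fetch
			}
			return "", false, err
		}
		return string(out), true, nil
	}

	func main() {
		logs, ok, err := tailLogsTolerant("e2e38d51fcd8808650647a6a934f24c0e58201b0d4791741f867465979211db8")
		switch {
		case err != nil:
			fmt.Println("error:", err)
		case !ok:
			fmt.Println("container no longer exists; skipping")
		default:
			fmt.Print(logs)
		}
	}
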
	I1002 22:02:56.108981 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:56.109022 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:56.188969 1160029 logs.go:123] Gathering logs for kube-controller-manager [a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a] ...
	I1002 22:02:56.189003 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1b77ee55edf8a8592ea78c04e453b472f54993718f23f5ef8133606fb091c3a"
	I1002 22:02:56.242703 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:56.242735 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:56.284719 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:56.284755 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:56.359209 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:56.359245 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:56.381512 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:56.381540 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:02:56.432098 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:56.432126 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:58.977721 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:02:58.978127 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:02:58.978174 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:02:58.978231 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:02:59.021371 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:59.021394 1160029 cri.go:89] found id: ""
	I1002 22:02:59.021403 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:02:59.021465 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.026300 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:02:59.026378 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:02:59.070099 1160029 cri.go:89] found id: ""
	I1002 22:02:59.070122 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.070131 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:02:59.070138 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:02:59.070206 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:02:59.112757 1160029 cri.go:89] found id: ""
	I1002 22:02:59.112779 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.112788 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:02:59.112795 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:02:59.112854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:02:59.163329 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:59.163349 1160029 cri.go:89] found id: ""
	I1002 22:02:59.163358 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:02:59.163418 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.168316 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:02:59.168409 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:02:59.214822 1160029 cri.go:89] found id: ""
	I1002 22:02:59.214847 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.214856 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:02:59.214864 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:02:59.214927 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:02:59.257810 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:59.257846 1160029 cri.go:89] found id: ""
	I1002 22:02:59.257854 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:02:59.257911 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:02:59.262389 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:02:59.262467 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:02:59.310187 1160029 cri.go:89] found id: ""
	I1002 22:02:59.310211 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.310219 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:02:59.310233 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:02:59.310295 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:02:59.352789 1160029 cri.go:89] found id: ""
	I1002 22:02:59.352824 1160029 logs.go:284] 0 containers: []
	W1002 22:02:59.352833 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:02:59.352843 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:02:59.352855 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:02:59.427976 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:02:59.428013 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:02:59.450020 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:02:59.450048 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:02:59.529347 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:02:59.529371 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:02:59.529383 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:02:59.580628 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:02:59.580660 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:02:59.662193 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:02:59.662230 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:02:59.716307 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:02:59.716337 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:02:59.758322 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:02:59.758357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:02.318518 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:02.318904 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:02.318955 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:02.319015 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:02.376004 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:02.376029 1160029 cri.go:89] found id: ""
	I1002 22:03:02.376038 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:02.376093 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.381736 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:02.381830 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:02.449278 1160029 cri.go:89] found id: ""
	I1002 22:03:02.449312 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.449321 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:02.449328 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:02.449394 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:02.504469 1160029 cri.go:89] found id: ""
	I1002 22:03:02.504498 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.504507 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:02.504517 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:02.504574 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:02.552028 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:02.552049 1160029 cri.go:89] found id: ""
	I1002 22:03:02.552057 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:02.552115 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.556727 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:02.556802 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:02.600504 1160029 cri.go:89] found id: ""
	I1002 22:03:02.600525 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.600533 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:02.600539 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:02.600596 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:02.642187 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:02.642212 1160029 cri.go:89] found id: ""
	I1002 22:03:02.642221 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:02.642278 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:02.646793 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:02.646864 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:02.690884 1160029 cri.go:89] found id: ""
	I1002 22:03:02.690965 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.691000 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:02.691048 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:02.691149 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:02.740024 1160029 cri.go:89] found id: ""
	I1002 22:03:02.740051 1160029 logs.go:284] 0 containers: []
	W1002 22:03:02.740059 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:02.740068 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:02.740080 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:02.780747 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:02.780781 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:02.846401 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:02.846431 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:02.929960 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:02.929997 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:02.951905 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:02.951933 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:03.047470 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:03.047551 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:03.047625 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:03.098330 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:03.098364 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:03.184312 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:03.184350 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:05.727550 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:05.727963 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:05.728010 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:05.728063 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:05.769757 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:05.769781 1160029 cri.go:89] found id: ""
	I1002 22:03:05.769790 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:05.769885 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:05.774314 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:05.774388 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:05.820319 1160029 cri.go:89] found id: ""
	I1002 22:03:05.820344 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.820353 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:05.820359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:05.820417 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:05.863605 1160029 cri.go:89] found id: ""
	I1002 22:03:05.863627 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.863635 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:05.863641 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:05.863700 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:05.908334 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:05.908403 1160029 cri.go:89] found id: ""
	I1002 22:03:05.908426 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:05.908508 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:05.913046 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:05.913172 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:05.959923 1160029 cri.go:89] found id: ""
	I1002 22:03:05.959947 1160029 logs.go:284] 0 containers: []
	W1002 22:03:05.959954 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:05.959961 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:05.960021 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:06.007960 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:06.008036 1160029 cri.go:89] found id: ""
	I1002 22:03:06.008059 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:06.008156 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:06.013478 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:06.013608 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:06.063502 1160029 cri.go:89] found id: ""
	I1002 22:03:06.063579 1160029 logs.go:284] 0 containers: []
	W1002 22:03:06.063600 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:06.063609 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:06.063675 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:06.107239 1160029 cri.go:89] found id: ""
	I1002 22:03:06.107313 1160029 logs.go:284] 0 containers: []
	W1002 22:03:06.107329 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:06.107340 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:06.107354 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:06.215936 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:06.215976 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:06.264679 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:06.264707 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:06.306333 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:06.306368 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:06.371592 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:06.371618 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:06.458994 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:06.459031 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:06.479722 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:06.479750 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:06.555580 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:06.555671 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:06.555691 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.103579 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:09.104048 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:09.104105 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:09.104164 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:09.148452 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.148471 1160029 cri.go:89] found id: ""
	I1002 22:03:09.148480 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:09.148545 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.153009 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:09.153081 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:09.198118 1160029 cri.go:89] found id: ""
	I1002 22:03:09.198143 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.198151 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:09.198157 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:09.198218 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:09.243593 1160029 cri.go:89] found id: ""
	I1002 22:03:09.243617 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.243626 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:09.243633 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:09.243692 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:09.286247 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:09.286270 1160029 cri.go:89] found id: ""
	I1002 22:03:09.286279 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:09.286335 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.290767 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:09.290831 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:09.342522 1160029 cri.go:89] found id: ""
	I1002 22:03:09.342542 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.342550 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:09.342557 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:09.342628 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:09.389420 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:09.389446 1160029 cri.go:89] found id: ""
	I1002 22:03:09.389466 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:09.389526 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:09.394221 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:09.394296 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:09.435443 1160029 cri.go:89] found id: ""
	I1002 22:03:09.435471 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.435480 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:09.435487 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:09.435549 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:09.477321 1160029 cri.go:89] found id: ""
	I1002 22:03:09.477342 1160029 logs.go:284] 0 containers: []
	W1002 22:03:09.477350 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:09.477360 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:09.477372 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:09.556629 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:09.556693 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:09.556720 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:09.607409 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:09.607442 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:09.699143 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:09.699180 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:09.745122 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:09.745240 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:09.791175 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:09.791212 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:09.842844 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:09.842872 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:09.922433 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:09.922467 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:12.443917 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:12.444351 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:12.444394 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:12.444450 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:12.488011 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:12.488077 1160029 cri.go:89] found id: ""
	I1002 22:03:12.488093 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:12.488157 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.493404 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:12.493475 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:12.540848 1160029 cri.go:89] found id: ""
	I1002 22:03:12.540873 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.540882 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:12.540889 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:12.540950 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:12.585898 1160029 cri.go:89] found id: ""
	I1002 22:03:12.585922 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.585930 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:12.585937 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:12.585998 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:12.627491 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:12.627513 1160029 cri.go:89] found id: ""
	I1002 22:03:12.627521 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:12.627579 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.631945 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:12.632013 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:12.674981 1160029 cri.go:89] found id: ""
	I1002 22:03:12.675004 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.675013 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:12.675020 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:12.675085 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:12.718776 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:12.718839 1160029 cri.go:89] found id: ""
	I1002 22:03:12.718861 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:12.718943 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:12.723424 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:12.723517 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:12.767007 1160029 cri.go:89] found id: ""
	I1002 22:03:12.767032 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.767040 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:12.767047 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:12.767141 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:12.809851 1160029 cri.go:89] found id: ""
	I1002 22:03:12.809874 1160029 logs.go:284] 0 containers: []
	W1002 22:03:12.809882 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:12.809892 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:12.809905 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:12.897393 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:12.897433 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:12.946867 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:12.946893 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:12.988547 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:12.988582 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:13.060855 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:13.060882 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:13.143364 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:13.143397 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:13.166073 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:13.166115 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:13.265727 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:13.265915 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:13.265937 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:15.831779 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:15.832161 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:15.832203 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:15.832258 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:15.873377 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:15.873399 1160029 cri.go:89] found id: ""
	I1002 22:03:15.873406 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:15.873471 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:15.878081 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:15.878153 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:15.919344 1160029 cri.go:89] found id: ""
	I1002 22:03:15.919365 1160029 logs.go:284] 0 containers: []
	W1002 22:03:15.919375 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:15.919382 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:15.919440 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:15.961803 1160029 cri.go:89] found id: ""
	I1002 22:03:15.961831 1160029 logs.go:284] 0 containers: []
	W1002 22:03:15.961839 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:15.961846 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:15.961908 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:16.019297 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:16.019317 1160029 cri.go:89] found id: ""
	I1002 22:03:16.019325 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:16.019382 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:16.024464 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:16.024540 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:16.077595 1160029 cri.go:89] found id: ""
	I1002 22:03:16.077617 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.077626 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:16.077632 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:16.077692 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:16.126530 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:16.126549 1160029 cri.go:89] found id: ""
	I1002 22:03:16.126558 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:16.126615 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:16.131447 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:16.131551 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:16.181511 1160029 cri.go:89] found id: ""
	I1002 22:03:16.181535 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.181543 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:16.181550 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:16.181610 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:16.226997 1160029 cri.go:89] found id: ""
	I1002 22:03:16.227019 1160029 logs.go:284] 0 containers: []
	W1002 22:03:16.227026 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:16.227036 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:16.227049 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:16.312560 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:16.312635 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:16.335913 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:16.336084 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:16.435681 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:16.435741 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:16.435763 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:16.492214 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:16.492244 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:16.581708 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:16.581745 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:16.628395 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:16.628422 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:16.668235 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:16.668268 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:19.232534 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:19.232959 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:19.233055 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:19.233134 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:19.276904 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:19.276930 1160029 cri.go:89] found id: ""
	I1002 22:03:19.276938 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:19.277022 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.281881 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:19.281961 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:19.338966 1160029 cri.go:89] found id: ""
	I1002 22:03:19.338989 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.338998 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:19.339004 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:19.339089 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:19.384663 1160029 cri.go:89] found id: ""
	I1002 22:03:19.384685 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.384694 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:19.384701 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:19.384759 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:19.430728 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:19.430749 1160029 cri.go:89] found id: ""
	I1002 22:03:19.430757 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:19.430818 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.435608 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:19.435694 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:19.478395 1160029 cri.go:89] found id: ""
	I1002 22:03:19.478419 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.478427 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:19.478434 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:19.478492 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:19.525986 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:19.526006 1160029 cri.go:89] found id: ""
	I1002 22:03:19.526014 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:19.526073 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:19.530801 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:19.530878 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:19.580353 1160029 cri.go:89] found id: ""
	I1002 22:03:19.580378 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.580388 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:19.580394 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:19.580455 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:19.634138 1160029 cri.go:89] found id: ""
	I1002 22:03:19.634162 1160029 logs.go:284] 0 containers: []
	W1002 22:03:19.634172 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:19.634181 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:19.634194 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:19.722176 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:19.722214 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:19.746252 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:19.746287 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:19.827587 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:19.827610 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:19.827624 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:19.881152 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:19.881182 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:19.966232 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:19.966272 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:20.022456 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:20.022487 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:20.064541 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:20.064577 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:22.617974 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:22.618399 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:22.618453 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:22.618510 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:22.664330 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:22.664353 1160029 cri.go:89] found id: ""
	I1002 22:03:22.664361 1160029 logs.go:284] 1 containers: [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:22.664425 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.669546 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:22.669619 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:22.718600 1160029 cri.go:89] found id: ""
	I1002 22:03:22.718621 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.718630 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:22.718636 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:22.718694 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:22.762212 1160029 cri.go:89] found id: ""
	I1002 22:03:22.762234 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.762242 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:22.762250 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:22.762319 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:22.809833 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:22.809856 1160029 cri.go:89] found id: ""
	I1002 22:03:22.809864 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:22.809921 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.814532 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:22.814651 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:22.862158 1160029 cri.go:89] found id: ""
	I1002 22:03:22.862234 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.862256 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:22.862280 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:22.862364 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:22.916691 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:22.916751 1160029 cri.go:89] found id: ""
	I1002 22:03:22.916773 1160029 logs.go:284] 1 containers: [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:22.916850 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:22.921824 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:22.921942 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:22.963132 1160029 cri.go:89] found id: ""
	I1002 22:03:22.963196 1160029 logs.go:284] 0 containers: []
	W1002 22:03:22.963218 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:22.963232 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:22.963306 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:23.012720 1160029 cri.go:89] found id: ""
	I1002 22:03:23.012797 1160029 logs.go:284] 0 containers: []
	W1002 22:03:23.012821 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:23.012861 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:23.012893 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:23.034054 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:23.034085 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:23.119233 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:23.119270 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:23.119282 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:23.165862 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:23.165891 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:23.252596 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:23.252634 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:23.318705 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:23.318775 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:23.361931 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:23.362018 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:23.425339 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:23.425370 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:26.019592 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:27.188989 1152871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (54.985964181s)
	I1002 22:03:27.194824 1152871 logs.go:123] Gathering logs for etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] ...
	I1002 22:03:27.194929 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959"
	I1002 22:03:27.284201 1152871 logs.go:123] Gathering logs for coredns [1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60] ...
	I1002 22:03:27.284326 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	I1002 22:03:27.375529 1152871 logs.go:123] Gathering logs for container status ...
	I1002 22:03:27.375637 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:27.474199 1152871 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:27.474296 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:27.706712 1152871 logs.go:123] Gathering logs for coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] ...
	I1002 22:03:27.706817 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794"
	I1002 22:03:27.809941 1152871 logs.go:123] Gathering logs for kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] ...
	I1002 22:03:27.809974 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66"
	I1002 22:03:27.981792 1152871 logs.go:123] Gathering logs for kube-controller-manager [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec] ...
	I1002 22:03:27.981887 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec"
	I1002 22:03:30.618338 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:03:30.627191 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 22:03:30.642455 1152871 api_server.go:141] control plane version: v1.28.2
	I1002 22:03:30.642500 1152871 api_server.go:131] duration metric: took 3m4.381557151s to wait for apiserver health ...
	I1002 22:03:30.642511 1152871 cni.go:84] Creating CNI manager for ""
	I1002 22:03:30.642518 1152871 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:03:30.644638 1152871 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1002 22:03:31.019894 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 22:03:31.019946 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:31.020013 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:31.082766 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:31.082805 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:31.082811 1160029 cri.go:89] found id: ""
	I1002 22:03:31.082819 1160029 logs.go:284] 2 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:31.082875 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.088254 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.093420 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:31.093490 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:31.150297 1160029 cri.go:89] found id: ""
	I1002 22:03:31.150318 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.150326 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:31.150332 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:31.150390 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:31.208396 1160029 cri.go:89] found id: ""
	I1002 22:03:31.208417 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.208425 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:31.208432 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:31.208490 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:31.272890 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:31.272908 1160029 cri.go:89] found id: ""
	I1002 22:03:31.272916 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:31.272975 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.278087 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:31.278156 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:31.339661 1160029 cri.go:89] found id: ""
	I1002 22:03:31.339682 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.339690 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:31.339697 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:31.339754 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:31.411914 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:31.411934 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:31.411939 1160029 cri.go:89] found id: ""
	I1002 22:03:31.411947 1160029 logs.go:284] 2 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:31.412011 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.417396 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:31.422271 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:31.422336 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:31.477753 1160029 cri.go:89] found id: ""
	I1002 22:03:31.477775 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.477783 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:31.477793 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:31.477863 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:31.537959 1160029 cri.go:89] found id: ""
	I1002 22:03:31.537980 1160029 logs.go:284] 0 containers: []
	W1002 22:03:31.537994 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:31.538009 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:31.538022 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:03:30.646855 1152871 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 22:03:30.652347 1152871 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1002 22:03:30.652369 1152871 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1002 22:03:30.674209 1152871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 22:03:41.628403 1160029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.090357852s)
	W1002 22:03:41.628446 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1002 22:03:41.628455 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:41.628467 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:41.693793 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:41.693879 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:41.788206 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:41.788247 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:41.837235 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:41.837262 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:41.903821 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:41.903851 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:41.992775 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:41.992811 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:42.030778 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:42.030862 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:42.093641 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:42.093732 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:42.148134 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:42.148162 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:44.697142 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:47.133012 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46326->192.168.76.2:8443: read: connection reset by peer
	I1002 22:03:47.133065 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:47.133142 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:47.184804 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:47.184824 1160029 cri.go:89] found id: "37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:47.184830 1160029 cri.go:89] found id: ""
	I1002 22:03:47.184838 1160029 logs.go:284] 2 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4]
	I1002 22:03:47.184892 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.189562 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.193804 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:47.193879 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:47.237882 1160029 cri.go:89] found id: ""
	I1002 22:03:47.237905 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.237914 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:47.237921 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:47.237984 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:47.283549 1160029 cri.go:89] found id: ""
	I1002 22:03:47.283572 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.283581 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:47.283588 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:47.283649 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:45.268101 1152871 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (14.593827569s)
	I1002 22:03:45.268152 1152871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:03:45.283919 1152871 system_pods.go:59] 8 kube-system pods found
	I1002 22:03:45.283962 1152871 system_pods.go:61] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:03:45.283969 1152871 system_pods.go:61] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:03:45.283975 1152871 system_pods.go:61] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:03:45.284018 1152871 system_pods.go:61] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:03:45.284025 1152871 system_pods.go:61] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:03:45.284036 1152871 system_pods.go:61] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:03:45.284041 1152871 system_pods.go:61] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:03:45.284050 1152871 system_pods.go:61] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:03:45.284056 1152871 system_pods.go:74] duration metric: took 15.896255ms to wait for pod list to return data ...
	I1002 22:03:45.284079 1152871 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:03:45.288036 1152871 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:03:45.288073 1152871 node_conditions.go:123] node cpu capacity is 2
	I1002 22:03:45.288087 1152871 node_conditions.go:105] duration metric: took 4.000408ms to run NodePressure ...
	I1002 22:03:45.288128 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 22:03:45.539618 1152871 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1002 22:03:45.544743 1152871 retry.go:31] will retry after 155.848612ms: kubelet not initialised
	I1002 22:03:45.706307 1152871 retry.go:31] will retry after 547.400392ms: kubelet not initialised
	I1002 22:03:46.260451 1152871 retry.go:31] will retry after 612.220756ms: kubelet not initialised
	I1002 22:03:46.879309 1152871 retry.go:31] will retry after 1.197216323s: kubelet not initialised
	I1002 22:03:48.087011 1152871 retry.go:31] will retry after 1.520294818s: kubelet not initialised
	I1002 22:03:49.613680 1152871 retry.go:31] will retry after 2.067754829s: kubelet not initialised
	I1002 22:03:47.326587 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:47.326610 1160029 cri.go:89] found id: ""
	I1002 22:03:47.326619 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:47.326675 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.332278 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:47.332353 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:47.375762 1160029 cri.go:89] found id: ""
	I1002 22:03:47.375783 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.375791 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:47.375798 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:47.375854 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:47.419014 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:47.419034 1160029 cri.go:89] found id: "ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:47.419040 1160029 cri.go:89] found id: ""
	I1002 22:03:47.419048 1160029 logs.go:284] 2 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4]
	I1002 22:03:47.419102 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.423907 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:47.428154 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:47.428224 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:47.477597 1160029 cri.go:89] found id: ""
	I1002 22:03:47.477619 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.477627 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:47.477634 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:47.477697 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:47.522112 1160029 cri.go:89] found id: ""
	I1002 22:03:47.522148 1160029 logs.go:284] 0 containers: []
	W1002 22:03:47.522157 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:47.522170 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:47.522187 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:47.571497 1160029 logs.go:123] Gathering logs for kube-apiserver [37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4] ...
	I1002 22:03:47.571532 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37a1d6d5e2cfc53e8bdffa6fc4ad4495fc6bc764ad01b43da85c689cef1e4ae4"
	I1002 22:03:47.639710 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:47.639741 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:47.713903 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:47.713926 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:47.713940 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:47.803697 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:47.803747 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:47.855030 1160029 logs.go:123] Gathering logs for kube-controller-manager [ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4] ...
	I1002 22:03:47.855059 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce3cc9524aa1c70a0a0210a3b15d85a60631fbfb8d5109539b1bb17a2b3b08d4"
	I1002 22:03:47.900506 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:47.900536 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:47.946964 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:47.947001 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:48.015925 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:48.015957 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:48.111318 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:48.111353 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:50.635629 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:50.636075 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:50.636144 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:50.636221 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:50.686873 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:50.686895 1160029 cri.go:89] found id: ""
	I1002 22:03:50.686904 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:50.686961 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.691629 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:50.691701 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:50.734477 1160029 cri.go:89] found id: ""
	I1002 22:03:50.734503 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.734512 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:50.734519 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:50.734587 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:50.776499 1160029 cri.go:89] found id: ""
	I1002 22:03:50.776527 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.776536 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:50.776543 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:50.776604 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:50.823031 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:50.823056 1160029 cri.go:89] found id: ""
	I1002 22:03:50.823064 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:50.823120 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.827608 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:50.827677 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:50.870861 1160029 cri.go:89] found id: ""
	I1002 22:03:50.870883 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.870891 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:50.870897 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:50.870957 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:50.913624 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:50.913646 1160029 cri.go:89] found id: ""
	I1002 22:03:50.913655 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:50.913713 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:50.918305 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:50.918374 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:50.962684 1160029 cri.go:89] found id: ""
	I1002 22:03:50.962707 1160029 logs.go:284] 0 containers: []
	W1002 22:03:50.962715 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:50.962722 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:50.962780 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:51.012692 1160029 cri.go:89] found id: ""
	I1002 22:03:51.012722 1160029 logs.go:284] 0 containers: []
	W1002 22:03:51.012731 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:51.012741 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:51.012754 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:51.111533 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:51.111570 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:51.133954 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:51.133986 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:51.209135 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:51.209156 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:51.209169 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:51.281338 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:51.281366 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:51.396997 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:51.397033 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:51.442241 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:51.442268 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:51.489786 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:51.489826 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:51.687895 1152871 retry.go:31] will retry after 3.545961405s: kubelet not initialised
	I1002 22:03:54.045844 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:54.046303 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:54.046359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:54.046421 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:54.090932 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:54.090958 1160029 cri.go:89] found id: ""
	I1002 22:03:54.090967 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:54.091026 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.096357 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:54.096431 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:54.142507 1160029 cri.go:89] found id: ""
	I1002 22:03:54.142531 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.142539 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:54.142546 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:54.142611 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:54.187424 1160029 cri.go:89] found id: ""
	I1002 22:03:54.187445 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.187454 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:54.187461 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:54.187522 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:54.229971 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:54.229992 1160029 cri.go:89] found id: ""
	I1002 22:03:54.230001 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:54.230057 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.235809 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:54.235891 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:54.279621 1160029 cri.go:89] found id: ""
	I1002 22:03:54.279643 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.279652 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:54.279658 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:54.279718 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:54.326775 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:54.326796 1160029 cri.go:89] found id: ""
	I1002 22:03:54.326805 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:54.326868 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:54.331502 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:54.331588 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:54.380370 1160029 cri.go:89] found id: ""
	I1002 22:03:54.380391 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.380399 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:54.380405 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:54.380461 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:54.424961 1160029 cri.go:89] found id: ""
	I1002 22:03:54.425033 1160029 logs.go:284] 0 containers: []
	W1002 22:03:54.425049 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:54.425060 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:54.425072 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:54.450635 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:54.450661 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:54.532991 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:54.533011 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:54.533027 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:54.585650 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:54.585680 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:54.680115 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:54.680149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:54.726722 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:54.726750 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:54.771800 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:54.771833 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:54.823860 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:54.823892 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:55.239869 1152871 retry.go:31] will retry after 6.03497621s: kubelet not initialised
	I1002 22:03:57.425743 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:03:57.426253 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:03:57.426305 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:03:57.426372 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:03:57.472723 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:57.472749 1160029 cri.go:89] found id: ""
	I1002 22:03:57.472758 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:03:57.472824 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.477768 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:03:57.477838 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:03:57.524289 1160029 cri.go:89] found id: ""
	I1002 22:03:57.524316 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.524346 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:03:57.524357 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:03:57.524428 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:03:57.568739 1160029 cri.go:89] found id: ""
	I1002 22:03:57.568760 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.568768 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:03:57.568776 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:03:57.568834 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:03:57.615328 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:03:57.615349 1160029 cri.go:89] found id: ""
	I1002 22:03:57.615357 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:03:57.615413 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.620440 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:03:57.620516 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:03:57.668585 1160029 cri.go:89] found id: ""
	I1002 22:03:57.668606 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.668614 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:03:57.668626 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:03:57.668685 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:03:57.712177 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:57.712207 1160029 cri.go:89] found id: ""
	I1002 22:03:57.712220 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:03:57.712295 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:03:57.716907 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:03:57.716981 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:03:57.759224 1160029 cri.go:89] found id: ""
	I1002 22:03:57.759248 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.759256 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:03:57.759263 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:03:57.759321 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:03:57.808285 1160029 cri.go:89] found id: ""
	I1002 22:03:57.808311 1160029 logs.go:284] 0 containers: []
	W1002 22:03:57.808320 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:03:57.808330 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:03:57.808343 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:03:57.853564 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:03:57.853591 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:03:57.902454 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:03:57.902487 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:03:57.953289 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:03:57.953316 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:03:58.057953 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:03:58.057990 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:03:58.080225 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:03:58.080253 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:03:58.167505 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:03:58.167589 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:03:58.167612 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:03:58.218639 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:03:58.218672 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:00.854468 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:00.854981 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:00.855028 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:00.855099 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:00.899290 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:00.899314 1160029 cri.go:89] found id: ""
	I1002 22:04:00.899323 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:00.899394 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:00.904164 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:00.904263 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:00.949598 1160029 cri.go:89] found id: ""
	I1002 22:04:00.949621 1160029 logs.go:284] 0 containers: []
	W1002 22:04:00.949630 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:00.949636 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:00.949710 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:00.995627 1160029 cri.go:89] found id: ""
	I1002 22:04:00.995655 1160029 logs.go:284] 0 containers: []
	W1002 22:04:00.995664 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:00.995671 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:00.995730 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:01.043414 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:01.043436 1160029 cri.go:89] found id: ""
	I1002 22:04:01.043445 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:01.043503 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:01.048244 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:01.048319 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:01.094544 1160029 cri.go:89] found id: ""
	I1002 22:04:01.094633 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.094657 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:01.094670 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:01.094757 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:01.143846 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:01.143921 1160029 cri.go:89] found id: ""
	I1002 22:04:01.143962 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:01.144041 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:01.149241 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:01.149318 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:01.202316 1160029 cri.go:89] found id: ""
	I1002 22:04:01.202365 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.202377 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:01.202384 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:01.202464 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:01.250206 1160029 cri.go:89] found id: ""
	I1002 22:04:01.250241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:01.250251 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:01.250262 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:01.250275 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:01.354668 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:01.354699 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:01.376069 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:01.376101 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:01.460001 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:01.460034 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:01.460046 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:01.526093 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:01.526127 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:01.627493 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:01.627545 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:01.679757 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:01.679786 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:01.730181 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:01.730215 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:01.280477 1152871 retry.go:31] will retry after 9.468766097s: kubelet not initialised
	I1002 22:04:04.289888 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:04.290287 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:04.290330 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:04.290403 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:04.334127 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:04.334147 1160029 cri.go:89] found id: ""
	I1002 22:04:04.334156 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:04.334210 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.338814 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:04.338898 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:04.380939 1160029 cri.go:89] found id: ""
	I1002 22:04:04.380968 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.380980 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:04.380995 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:04.381076 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:04.425955 1160029 cri.go:89] found id: ""
	I1002 22:04:04.425980 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.425994 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:04.426002 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:04.426060 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:04.473948 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:04.473969 1160029 cri.go:89] found id: ""
	I1002 22:04:04.473977 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:04.474033 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.478317 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:04.478390 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:04.521738 1160029 cri.go:89] found id: ""
	I1002 22:04:04.521809 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.521831 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:04.521853 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:04.521992 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:04.567461 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:04.567482 1160029 cri.go:89] found id: ""
	I1002 22:04:04.567490 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:04.567564 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:04.572754 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:04.572841 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:04.617527 1160029 cri.go:89] found id: ""
	I1002 22:04:04.617560 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.617570 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:04.617576 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:04.617645 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:04.660214 1160029 cri.go:89] found id: ""
	I1002 22:04:04.660241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:04.660249 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:04.660259 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:04.660274 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:04.773307 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:04.773342 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:04.820145 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:04.820174 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:04.867703 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:04.867736 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:04.929458 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:04.929485 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:05.034976 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:05.035017 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:05.057328 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:05.057359 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:05.137147 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:05.137168 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:05.137183 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:07.700017 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:07.700421 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:07.700475 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:07.700539 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:07.742898 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:07.742918 1160029 cri.go:89] found id: ""
	I1002 22:04:07.742927 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:07.742983 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.747593 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:07.747663 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:07.794300 1160029 cri.go:89] found id: ""
	I1002 22:04:07.794322 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.794330 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:07.794336 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:07.794394 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:07.835326 1160029 cri.go:89] found id: ""
	I1002 22:04:07.835354 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.835363 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:07.835370 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:07.835431 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:07.879004 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:07.879030 1160029 cri.go:89] found id: ""
	I1002 22:04:07.879039 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:07.879094 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.883476 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:07.883544 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:07.924164 1160029 cri.go:89] found id: ""
	I1002 22:04:07.924190 1160029 logs.go:284] 0 containers: []
	W1002 22:04:07.924198 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:07.924204 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:07.924259 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:07.967096 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:07.967116 1160029 cri.go:89] found id: ""
	I1002 22:04:07.967124 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:07.967178 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:07.971629 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:07.971695 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:08.020843 1160029 cri.go:89] found id: ""
	I1002 22:04:08.020866 1160029 logs.go:284] 0 containers: []
	W1002 22:04:08.020874 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:08.020881 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:08.020943 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:08.071242 1160029 cri.go:89] found id: ""
	I1002 22:04:08.071268 1160029 logs.go:284] 0 containers: []
	W1002 22:04:08.071289 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:08.071300 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:08.071316 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:08.183478 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:08.183556 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:08.230185 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:08.230219 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:08.278947 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:08.278982 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:08.326401 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:08.326429 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:08.436602 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:08.436644 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:08.458743 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:08.458774 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:08.538090 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:08.538166 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:08.538189 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.092147 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:11.092549 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:11.092599 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:11.092656 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:11.138396 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.138419 1160029 cri.go:89] found id: ""
	I1002 22:04:11.138429 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:11.138492 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.143105 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:11.143176 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:11.191124 1160029 cri.go:89] found id: ""
	I1002 22:04:11.191146 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.191155 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:11.191161 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:11.191221 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:11.238479 1160029 cri.go:89] found id: ""
	I1002 22:04:11.238502 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.238511 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:11.238517 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:11.238582 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:11.290364 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:11.290384 1160029 cri.go:89] found id: ""
	I1002 22:04:11.290392 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:11.290453 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.295107 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:11.295181 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:11.338167 1160029 cri.go:89] found id: ""
	I1002 22:04:11.338189 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.338197 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:11.338204 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:11.338273 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:11.385641 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:11.385663 1160029 cri.go:89] found id: ""
	I1002 22:04:11.385671 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:11.385733 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:11.390692 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:11.390763 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:11.436492 1160029 cri.go:89] found id: ""
	I1002 22:04:11.436517 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.436525 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:11.436532 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:11.436590 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:11.478168 1160029 cri.go:89] found id: ""
	I1002 22:04:11.478192 1160029 logs.go:284] 0 containers: []
	W1002 22:04:11.478201 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:11.478210 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:11.478223 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:11.499582 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:11.499609 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:11.584985 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:11.585007 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:11.585020 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:11.635164 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:11.635198 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:11.740656 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:11.740694 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:11.785401 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:11.785430 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:11.830493 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:11.830530 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:11.881827 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:11.881863 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:10.754626 1152871 retry.go:31] will retry after 13.418516702s: kubelet not initialised
	I1002 22:04:14.493731 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:14.494142 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:14.494187 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:14.494240 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:14.543876 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:14.544232 1160029 cri.go:89] found id: ""
	I1002 22:04:14.544247 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:14.544324 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.548931 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:14.549001 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:14.591335 1160029 cri.go:89] found id: ""
	I1002 22:04:14.591402 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.591424 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:14.591439 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:14.591500 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:14.632781 1160029 cri.go:89] found id: ""
	I1002 22:04:14.632804 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.632812 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:14.632819 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:14.632876 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:14.676189 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:14.676212 1160029 cri.go:89] found id: ""
	I1002 22:04:14.676221 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:14.676277 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.681167 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:14.681265 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:14.726630 1160029 cri.go:89] found id: ""
	I1002 22:04:14.726655 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.726665 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:14.726672 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:14.726768 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:14.775998 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:14.776020 1160029 cri.go:89] found id: ""
	I1002 22:04:14.776028 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:14.776086 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:14.781008 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:14.781134 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:14.826140 1160029 cri.go:89] found id: ""
	I1002 22:04:14.826164 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.826172 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:14.826179 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:14.826265 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:14.871429 1160029 cri.go:89] found id: ""
	I1002 22:04:14.871497 1160029 logs.go:284] 0 containers: []
	W1002 22:04:14.871520 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:14.871536 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:14.871549 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:14.920304 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:14.920334 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:15.013742 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:15.013789 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:15.076216 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:15.076246 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:15.124476 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:15.124511 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:15.178593 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:15.178622 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:15.290368 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:15.290404 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:15.311963 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:15.311992 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:15.384667 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:17.885630 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:17.886064 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:17.886122 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:17.886187 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:17.930138 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:17.930160 1160029 cri.go:89] found id: ""
	I1002 22:04:17.930171 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:17.930227 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:17.934815 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:17.934923 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:17.979281 1160029 cri.go:89] found id: ""
	I1002 22:04:17.979355 1160029 logs.go:284] 0 containers: []
	W1002 22:04:17.979384 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:17.979399 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:17.979485 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:18.023989 1160029 cri.go:89] found id: ""
	I1002 22:04:18.024079 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.024107 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:18.024120 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:18.024206 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:18.072842 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:18.072919 1160029 cri.go:89] found id: ""
	I1002 22:04:18.072941 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:18.073032 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:18.078244 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:18.078361 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:18.128570 1160029 cri.go:89] found id: ""
	I1002 22:04:18.128598 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.128606 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:18.128613 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:18.128676 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:18.174783 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:18.174856 1160029 cri.go:89] found id: ""
	I1002 22:04:18.174879 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:18.174957 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:18.180222 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:18.180346 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:18.226429 1160029 cri.go:89] found id: ""
	I1002 22:04:18.226456 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.226475 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:18.226484 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:18.226555 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:18.271652 1160029 cri.go:89] found id: ""
	I1002 22:04:18.271728 1160029 logs.go:284] 0 containers: []
	W1002 22:04:18.271742 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:18.271753 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:18.271767 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:18.318377 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:18.318405 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:18.415723 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:18.415761 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:18.462191 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:18.462221 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:18.509075 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:18.509108 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:18.564223 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:18.564249 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:18.680278 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:18.680314 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:18.702505 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:18.702538 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:18.782134 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:21.282529 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:21.282935 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:21.282978 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:21.283031 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:21.326756 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:21.326779 1160029 cri.go:89] found id: ""
	I1002 22:04:21.326788 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:21.326844 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.331359 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:21.331427 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:21.374259 1160029 cri.go:89] found id: ""
	I1002 22:04:21.374282 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.374290 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:21.374297 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:21.374353 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:21.415220 1160029 cri.go:89] found id: ""
	I1002 22:04:21.415241 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.415250 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:21.415256 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:21.415313 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:21.458531 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:21.458551 1160029 cri.go:89] found id: ""
	I1002 22:04:21.458560 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:21.458616 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.463215 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:21.463289 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:21.514768 1160029 cri.go:89] found id: ""
	I1002 22:04:21.514790 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.514799 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:21.514805 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:21.514864 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:21.556699 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:21.556720 1160029 cri.go:89] found id: ""
	I1002 22:04:21.556728 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:21.556785 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:21.561715 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:21.561784 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:21.603910 1160029 cri.go:89] found id: ""
	I1002 22:04:21.603975 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.603989 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:21.603996 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:21.604059 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:21.649738 1160029 cri.go:89] found id: ""
	I1002 22:04:21.649761 1160029 logs.go:284] 0 containers: []
	W1002 22:04:21.649769 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:21.649779 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:21.649794 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:21.695647 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:21.695680 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:21.744694 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:21.744720 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:21.858676 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:21.858711 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:21.881304 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:21.881336 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:21.960590 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:21.960668 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:21.960712 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:22.007588 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:22.007624 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:22.109950 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:22.109991 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:24.181723 1152871 retry.go:31] will retry after 7.765021344s: kubelet not initialised
	I1002 22:04:24.662592 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:24.663050 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:24.663106 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:24.663164 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:24.710564 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:24.710641 1160029 cri.go:89] found id: ""
	I1002 22:04:24.710665 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:24.710753 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.716172 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:24.716260 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:24.762119 1160029 cri.go:89] found id: ""
	I1002 22:04:24.762140 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.762149 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:24.762155 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:24.762216 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:24.804776 1160029 cri.go:89] found id: ""
	I1002 22:04:24.804799 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.804807 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:24.804814 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:24.804871 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:24.847302 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:24.847327 1160029 cri.go:89] found id: ""
	I1002 22:04:24.847335 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:24.847391 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.852099 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:24.852189 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:24.898495 1160029 cri.go:89] found id: ""
	I1002 22:04:24.898568 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.898584 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:24.898592 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:24.898654 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:24.949603 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:24.949625 1160029 cri.go:89] found id: ""
	I1002 22:04:24.949633 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:24.949689 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:24.954186 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:24.954258 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:24.995299 1160029 cri.go:89] found id: ""
	I1002 22:04:24.995366 1160029 logs.go:284] 0 containers: []
	W1002 22:04:24.995379 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:24.995387 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:24.995447 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:25.042471 1160029 cri.go:89] found id: ""
	I1002 22:04:25.042539 1160029 logs.go:284] 0 containers: []
	W1002 22:04:25.042554 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:25.042564 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:25.042577 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:25.093890 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:25.093928 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:25.142064 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:25.142093 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:25.258034 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:25.258070 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:25.279855 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:25.279885 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 22:04:25.356782 1160029 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 22:04:25.356804 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:25.356816 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:25.407084 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:25.407115 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:25.506794 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:25.506828 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.050802 1160029 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1002 22:04:28.051272 1160029 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1002 22:04:28.051338 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 22:04:28.051407 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 22:04:28.096403 1160029 cri.go:89] found id: "2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:28.096423 1160029 cri.go:89] found id: ""
	I1002 22:04:28.096431 1160029 logs.go:284] 1 containers: [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da]
	I1002 22:04:28.096487 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.101277 1160029 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 22:04:28.101350 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 22:04:28.143442 1160029 cri.go:89] found id: ""
	I1002 22:04:28.143469 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.143477 1160029 logs.go:286] No container was found matching "etcd"
	I1002 22:04:28.143483 1160029 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 22:04:28.143544 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 22:04:28.201954 1160029 cri.go:89] found id: ""
	I1002 22:04:28.201977 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.201985 1160029 logs.go:286] No container was found matching "coredns"
	I1002 22:04:28.201991 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 22:04:28.202050 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 22:04:28.257907 1160029 cri.go:89] found id: "84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:28.257971 1160029 cri.go:89] found id: ""
	I1002 22:04:28.258000 1160029 logs.go:284] 1 containers: [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0]
	I1002 22:04:28.258072 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.263205 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 22:04:28.263279 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 22:04:28.317042 1160029 cri.go:89] found id: ""
	I1002 22:04:28.317066 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.317075 1160029 logs.go:286] No container was found matching "kube-proxy"
	I1002 22:04:28.317081 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 22:04:28.317155 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 22:04:28.362660 1160029 cri.go:89] found id: "350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.362681 1160029 cri.go:89] found id: ""
	I1002 22:04:28.362690 1160029 logs.go:284] 1 containers: [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e]
	I1002 22:04:28.362745 1160029 ssh_runner.go:195] Run: which crictl
	I1002 22:04:28.367234 1160029 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 22:04:28.367301 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 22:04:28.420348 1160029 cri.go:89] found id: ""
	I1002 22:04:28.420415 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.420437 1160029 logs.go:286] No container was found matching "kindnet"
	I1002 22:04:28.420459 1160029 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1002 22:04:28.420547 1160029 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1002 22:04:28.473903 1160029 cri.go:89] found id: ""
	I1002 22:04:28.473927 1160029 logs.go:284] 0 containers: []
	W1002 22:04:28.473936 1160029 logs.go:286] No container was found matching "storage-provisioner"
	I1002 22:04:28.473945 1160029 logs.go:123] Gathering logs for kube-apiserver [2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da] ...
	I1002 22:04:28.473958 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f08801d799d016b2698fe3a1c16d5f9b82dacae9162ad1c4efdb41a2969c8da"
	I1002 22:04:28.533550 1160029 logs.go:123] Gathering logs for kube-scheduler [84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0] ...
	I1002 22:04:28.533581 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84ac9dac95cd80a959972d21838cd47c9d57ade453289d77a40f840c39c909e0"
	I1002 22:04:28.631144 1160029 logs.go:123] Gathering logs for kube-controller-manager [350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e] ...
	I1002 22:04:28.631180 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 350b08bdab0acad03136847d9181c56338ee18fd1ff1a06b655da84c3220ac3e"
	I1002 22:04:28.699486 1160029 logs.go:123] Gathering logs for CRI-O ...
	I1002 22:04:28.699511 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 22:04:28.764153 1160029 logs.go:123] Gathering logs for container status ...
	I1002 22:04:28.764195 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 22:04:28.836328 1160029 logs.go:123] Gathering logs for kubelet ...
	I1002 22:04:28.836357 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 22:04:28.979109 1160029 logs.go:123] Gathering logs for dmesg ...
	I1002 22:04:28.979149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 22:04:29.007116 1160029 logs.go:123] Gathering logs for describe nodes ...
	I1002 22:04:29.007149 1160029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1002 22:04:31.953970 1152871 kubeadm.go:787] kubelet initialised
	I1002 22:04:31.953995 1152871 kubeadm.go:788] duration metric: took 46.414355607s waiting for restarted kubelet to initialise ...
	I1002 22:04:31.954004 1152871 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:31.960759 1152871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.971397 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.971420 1152871 pod_ready.go:81] duration metric: took 10.632329ms waiting for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.971433 1152871 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.977952 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.977993 1152871 pod_ready.go:81] duration metric: took 6.551371ms waiting for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.978045 1152871 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.984427 1152871 pod_ready.go:92] pod "etcd-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.984454 1152871 pod_ready.go:81] duration metric: took 6.398731ms waiting for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.984471 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.991091 1152871 pod_ready.go:92] pod "kube-apiserver-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:31.991115 1152871 pod_ready.go:81] duration metric: took 6.63686ms waiting for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:31.991130 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.352175 1152871 pod_ready.go:92] pod "kube-controller-manager-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:32.352200 1152871 pod_ready.go:81] duration metric: took 361.06133ms waiting for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.352213 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.752950 1152871 pod_ready.go:92] pod "kube-proxy-pqzpr" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:32.752978 1152871 pod_ready.go:81] duration metric: took 400.756574ms waiting for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:32.752990 1152871 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.153061 1152871 pod_ready.go:92] pod "kube-scheduler-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:33.153089 1152871 pod_ready.go:81] duration metric: took 400.091109ms waiting for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.153098 1152871 pod_ready.go:38] duration metric: took 1.19908599s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:33.153115 1152871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 22:04:33.162793 1152871 ops.go:34] apiserver oom_adj: -16
	I1002 22:04:33.162815 1152871 kubeadm.go:640] restartCluster took 4m33.713301751s
	I1002 22:04:33.162824 1152871 kubeadm.go:406] StartCluster complete in 4m33.866735038s
	I1002 22:04:33.162841 1152871 settings.go:142] acquiring lock: {Name:mk84ed9b341869374b10cf082af1bfa542d39dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:33.162907 1152871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:04:33.163820 1152871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17323-1042317/kubeconfig: {Name:mk6186c13a5b804fd6de8f5697b568acedb59886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 22:04:33.164699 1152871 kapi.go:59] client config for pause-050274: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/client.crt", KeyFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/pause-050274/client.key", CAFile:"/home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:
[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x169ede0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 22:04:33.165290 1152871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 22:04:33.165415 1152871 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 22:04:33.168719 1152871 out.go:177] * Enabled addons: 
	I1002 22:04:33.165655 1152871 config.go:182] Loaded profile config "pause-050274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 22:04:33.172217 1152871 addons.go:502] enable addons completed in 6.784519ms: enabled=[]
	I1002 22:04:33.190183 1152871 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-050274" context rescaled to 1 replicas
	I1002 22:04:33.190265 1152871 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 22:04:33.193364 1152871 out.go:177] * Verifying Kubernetes components...
	I1002 22:04:33.196201 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:04:33.312721 1152871 node_ready.go:35] waiting up to 6m0s for node "pause-050274" to be "Ready" ...
	I1002 22:04:33.312776 1152871 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1002 22:04:33.352441 1152871 node_ready.go:49] node "pause-050274" has status "Ready":"True"
	I1002 22:04:33.352465 1152871 node_ready.go:38] duration metric: took 39.715486ms waiting for node "pause-050274" to be "Ready" ...
	I1002 22:04:33.352476 1152871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:33.556619 1152871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.952859 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:33.952885 1152871 pod_ready.go:81] duration metric: took 396.231877ms waiting for pod "coredns-5dd5756b68-cm5nm" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:33.952897 1152871 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.352641 1152871 pod_ready.go:92] pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:34.352667 1152871 pod_ready.go:81] duration metric: took 399.762642ms waiting for pod "coredns-5dd5756b68-t6nc4" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.352681 1152871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.752022 1152871 pod_ready.go:92] pod "etcd-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:34.752047 1152871 pod_ready.go:81] duration metric: took 399.358164ms waiting for pod "etcd-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:34.752062 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.153270 1152871 pod_ready.go:92] pod "kube-apiserver-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.153296 1152871 pod_ready.go:81] duration metric: took 401.226316ms waiting for pod "kube-apiserver-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.153309 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.552499 1152871 pod_ready.go:92] pod "kube-controller-manager-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.552524 1152871 pod_ready.go:81] duration metric: took 399.206764ms waiting for pod "kube-controller-manager-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.552537 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.953686 1152871 pod_ready.go:92] pod "kube-proxy-pqzpr" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:35.953746 1152871 pod_ready.go:81] duration metric: took 401.161865ms waiting for pod "kube-proxy-pqzpr" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:35.953781 1152871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:36.353538 1152871 pod_ready.go:92] pod "kube-scheduler-pause-050274" in "kube-system" namespace has status "Ready":"True"
	I1002 22:04:36.353623 1152871 pod_ready.go:81] duration metric: took 399.821088ms waiting for pod "kube-scheduler-pause-050274" in "kube-system" namespace to be "Ready" ...
	I1002 22:04:36.353648 1152871 pod_ready.go:38] duration metric: took 3.0011612s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1002 22:04:36.353702 1152871 api_server.go:52] waiting for apiserver process to appear ...
	I1002 22:04:36.353810 1152871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 22:04:36.367405 1152871 api_server.go:72] duration metric: took 3.177092046s to wait for apiserver process to appear ...
	I1002 22:04:36.367430 1152871 api_server.go:88] waiting for apiserver healthz status ...
	I1002 22:04:36.367448 1152871 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 22:04:36.376325 1152871 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 22:04:36.377804 1152871 api_server.go:141] control plane version: v1.28.2
	I1002 22:04:36.377826 1152871 api_server.go:131] duration metric: took 10.388227ms to wait for apiserver health ...
	I1002 22:04:36.377844 1152871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 22:04:36.562936 1152871 system_pods.go:59] 8 kube-system pods found
	I1002 22:04:36.563045 1152871 system_pods.go:61] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:04:36.563074 1152871 system_pods.go:61] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:04:36.563124 1152871 system_pods.go:61] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:04:36.563174 1152871 system_pods.go:61] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:04:36.563195 1152871 system_pods.go:61] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:04:36.563230 1152871 system_pods.go:61] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:04:36.563286 1152871 system_pods.go:61] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:04:36.563316 1152871 system_pods.go:61] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:04:36.563368 1152871 system_pods.go:74] duration metric: took 185.493954ms to wait for pod list to return data ...
	I1002 22:04:36.563407 1152871 default_sa.go:34] waiting for default service account to be created ...
	I1002 22:04:36.753855 1152871 default_sa.go:45] found service account: "default"
	I1002 22:04:36.753963 1152871 default_sa.go:55] duration metric: took 190.526201ms for default service account to be created ...
	I1002 22:04:36.753996 1152871 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 22:04:36.957594 1152871 system_pods.go:86] 8 kube-system pods found
	I1002 22:04:36.957658 1152871 system_pods.go:89] "coredns-5dd5756b68-cm5nm" [18849f27-d4fc-44c4-b9e0-ec7b818e9c76] Running
	I1002 22:04:36.957687 1152871 system_pods.go:89] "coredns-5dd5756b68-t6nc4" [75d35777-c673-4733-aa3b-957c2358719b] Running
	I1002 22:04:36.957706 1152871 system_pods.go:89] "etcd-pause-050274" [cbf7d6f7-1d04-4d76-98b0-76204d0bd925] Running
	I1002 22:04:36.957741 1152871 system_pods.go:89] "kindnet-ztnzr" [ececf515-ef4b-4b91-9456-6530f0dcf4c0] Running
	I1002 22:04:36.957768 1152871 system_pods.go:89] "kube-apiserver-pause-050274" [7d042ae0-0418-4e40-b874-e2fffa8e7786] Running
	I1002 22:04:36.957789 1152871 system_pods.go:89] "kube-controller-manager-pause-050274" [928688d0-f5bf-421a-b0d7-c3069a59ebb2] Running
	I1002 22:04:36.957827 1152871 system_pods.go:89] "kube-proxy-pqzpr" [434448cf-f6fd-45df-a10e-be64371b993e] Running
	I1002 22:04:36.957851 1152871 system_pods.go:89] "kube-scheduler-pause-050274" [22f7c3fc-10e8-4a56-8317-050abd85895d] Running
	I1002 22:04:36.957873 1152871 system_pods.go:126] duration metric: took 203.818194ms to wait for k8s-apps to be running ...
	I1002 22:04:36.957908 1152871 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 22:04:36.958008 1152871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 22:04:36.976008 1152871 system_svc.go:56] duration metric: took 18.089648ms WaitForService to wait for kubelet.
	I1002 22:04:36.976079 1152871 kubeadm.go:581] duration metric: took 3.78577352s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 22:04:36.976133 1152871 node_conditions.go:102] verifying NodePressure condition ...
	I1002 22:04:37.162000 1152871 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 22:04:37.162074 1152871 node_conditions.go:123] node cpu capacity is 2
	I1002 22:04:37.162098 1152871 node_conditions.go:105] duration metric: took 185.948861ms to run NodePressure ...
	I1002 22:04:37.162124 1152871 start.go:228] waiting for startup goroutines ...
	I1002 22:04:37.162156 1152871 start.go:233] waiting for cluster config update ...
	I1002 22:04:37.162182 1152871 start.go:242] writing updated cluster config ...
	I1002 22:04:37.162558 1152871 ssh_runner.go:195] Run: rm -f paused
	I1002 22:04:37.291549 1152871 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1002 22:04:37.295187 1152871 out.go:177] * Done! kubectl is now configured to use "pause-050274" cluster and "default" namespace by default
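
The 1160029 process in the log above is stuck in a probe-and-gather loop: api_server.go keeps checking https://192.168.76.2:8443/healthz, records the refused TCP connection as "stopped", and then re-collects crictl and journalctl output before trying again. Below is only a rough standalone sketch of that healthz-polling pattern in Go; the URL, timeout, retry count and skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Assumed endpoint, mirroring the address probed in the log above.
	url := "https://192.168.76.2:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe targets the apiserver by IP, so this sketch skips
		// certificate verification (an assumption for the example).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Corresponds to the "stopped: ... connection refused" lines.
			fmt.Printf("attempt %d: apiserver not reachable: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("attempt %d: healthz returned %d\n", attempt, resp.StatusCode)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(3 * time.Second)
	}
}
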
	
	* 
	* ==> CRI-O <==
	* Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.201214786Z" level=info msg="Created container f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110: kube-system/kube-proxy-pqzpr/kube-proxy" id=d696ffc2-baf4-446d-b13b-d258ced9a7f4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.202026696Z" level=info msg="Starting container: f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110" id=9502d89c-74fe-4122-9dc4-e4b5473b3796 name=/runtime.v1.RuntimeService/StartContainer
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.305889913Z" level=info msg="Started container" PID=4501 containerID=f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110 description=kube-system/kube-proxy-pqzpr/kube-proxy id=9502d89c-74fe-4122-9dc4-e4b5473b3796 name=/runtime.v1.RuntimeService/StartContainer sandboxID=daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.833687955Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865689735Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865770523Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.865793234Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898772028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898819010Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.898841484Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944155425Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944195909Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.944215289Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.969627127Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 02 22:03:27 pause-050274 crio[2728]: time="2023-10-02 22:03:27.969667094Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 02 22:04:33 pause-050274 crio[2728]: time="2023-10-02 22:04:33.219071775Z" level=info msg="Stopping container: fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d (timeout: 30s)" id=230c154e-7672-409a-82f2-7bb4709a64f6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.338394362Z" level=info msg="Stopped container fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=230c154e-7672-409a-82f2-7bb4709a64f6 name=/runtime.v1.RuntimeService/StopContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.339309697Z" level=info msg="Stopping pod sandbox: 093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b" id=de830900-ab8c-45fe-8c2c-b5c9053bbd73 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.340355173Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-cm5nm Namespace:kube-system ID:093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b UID:18849f27-d4fc-44c4-b9e0-ec7b818e9c76 NetNS:/var/run/netns/81290806-fef1-4e7c-9cdb-d4297276d789 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.340518676Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-cm5nm from CNI network \"kindnet\" (type=ptp)"
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.370439671Z" level=info msg="Stopped pod sandbox: 093a3b8136476ffab092d7821a9f9530a39d97ba174fd7e2d516a1e25fec8b4b" id=de830900-ab8c-45fe-8c2c-b5c9053bbd73 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.751645501Z" level=info msg="Removing container: fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d" id=11fe00ec-4a6a-4298-9f28-d728ea3c1944 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.782418382Z" level=info msg="Removed container fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=11fe00ec-4a6a-4298-9f28-d728ea3c1944 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.784455550Z" level=info msg="Removing container: 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60" id=6e22342f-dfe5-455b-ba51-c90fb0006358 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 02 22:04:38 pause-050274 crio[2728]: time="2023-10-02 22:04:38.810572468Z" level=info msg="Removed container 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60: kube-system/coredns-5dd5756b68-cm5nm/coredns" id=6e22342f-dfe5-455b-ba51-c90fb0006358 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b11cf6cfdfbdc       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   About a minute ago   Running             kindnet-cni               3                   f65507beae24a       kindnet-ztnzr
	f7e4a5a8ab188       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   About a minute ago   Running             kube-proxy                3                   daf3f7c3b2ad8       kube-proxy-pqzpr
	8632e64640b55       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   About a minute ago   Running             coredns                   3                   97e11f94c241b       coredns-5dd5756b68-t6nc4
	bbb1e358b1459       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   About a minute ago   Running             kube-controller-manager   4                   26de429363aca       kube-controller-manager-pause-050274
	07ba4f10da84d       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   2 minutes ago        Running             etcd                      3                   9fcc0372960b9       etcd-pause-050274
	8eaba24185fb3       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c   2 minutes ago        Exited              kube-controller-manager   3                   26de429363aca       kube-controller-manager-pause-050274
	ae4711ea86465       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   2 minutes ago        Running             kube-scheduler            3                   e42a1887c1ea2       kube-scheduler-pause-050274
	a19e78a138148       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   4 minutes ago        Running             kube-apiserver            2                   7be7cba416b4c       kube-apiserver-pause-050274
	75fb3c3a6e10b       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago        Exited              kindnet-cni               2                   f65507beae24a       kindnet-ztnzr
	1c2c796686a0d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago        Exited              coredns                   2                   97e11f94c241b       coredns-5dd5756b68-t6nc4
	47232deeac89d       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa   4 minutes ago        Exited              kube-proxy                2                   daf3f7c3b2ad8       kube-proxy-pqzpr
	ce0a25ea6fc39       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7   4 minutes ago        Exited              kube-scheduler            2                   e42a1887c1ea2       kube-scheduler-pause-050274
	4b6c0654becf2       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago        Exited              etcd                      2                   9fcc0372960b9       etcd-pause-050274
	930be0a17a5f5       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c   5 minutes ago        Exited              kube-apiserver            1                   7be7cba416b4c       kube-apiserver-pause-050274
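
The container-status table above is produced by the same kind of crictl invocation that the cri.go entries earlier in the log issue once per component (sudo crictl ps -a --quiet --name=<component>). A simplified Go sketch of that discovery step, shelling out to crictl the way the ssh_runner lines show; the sudo handling and error reporting here are assumptions for the example, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited) whose
// name matches the given filter, via `sudo crictl ps -a --quiet --name=<name>`.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component names the log above queries one by one.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
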
	
	* 
	* ==> coredns [1c2c796686a0d2b433f286baa594edaef8d52d3077deb134160549bb26d8d794] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55643 - 45390 "HINFO IN 7808826954906539208.9011266081881239613. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024287056s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [8632e64640b550b270e902184a6b556f14cbf57afb6741596c54feeff9272049] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58382 - 34606 "HINFO IN 7509761253326897971.1672519378214393664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020881947s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-050274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-050274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=02d3b4696241894a75ebcb6562f5842e65de7b86
	                    minikube.k8s.io/name=pause-050274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T21_58_38_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 21:58:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-050274
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 02 Oct 2023 22:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 22:03:26 +0000   Mon, 02 Oct 2023 21:59:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-050274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 8cf5697589254791a1250402ed5024c0
	  System UUID:                f94a85a1-cfe2-427c-9e7a-9d431d040be8
	  Boot ID:                    37d51973-0c20-4c15-81f3-7000eb353560
	  Kernel Version:             5.15.0-1045-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-t6nc4                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m53s
	  kube-system                 etcd-pause-050274                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-ztnzr                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m53s
	  kube-system                 kube-apiserver-pause-050274             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-pause-050274    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-proxy-pqzpr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-scheduler-pause-050274             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m51s                  kube-proxy       
	  Normal   Starting                 74s                    kube-proxy       
	  Normal   Starting                 4m28s                  kube-proxy       
	  Normal   NodeHasSufficientPID     6m17s (x8 over 6m17s)  kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m17s (x8 over 6m17s)  kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m6s                   kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s                   kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s                   kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m54s                  node-controller  Node pause-050274 event: Registered Node pause-050274 in Controller
	  Normal   NodeReady                5m22s                  kubelet          Node pause-050274 status is now: NodeReady
	  Warning  ContainerGCFailed        5m6s                   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeHasSufficientMemory  80s (x6 over 4m16s)    kubelet          Node pause-050274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s (x6 over 4m16s)    kubelet          Node pause-050274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s (x6 over 4m16s)    kubelet          Node pause-050274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           75s                    node-controller  Node pause-050274 event: Registered Node pause-050274 in Controller
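
The Ready condition in the node description above is the same signal the earlier node_ready.go step reads when it reports node "pause-050274" has status "Ready":"True". A minimal client-go sketch of such a check follows; the kubeconfig path, and the use of client-go rather than minikube's own wrappers, are assumptions for illustration only.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its own per-profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "pause-050274", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the Ready condition, as shown in the Conditions block above.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s (%s)\n", node.Name, cond.Status, cond.Reason)
		}
	}
}
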
	
	* 
	* ==> dmesg <==
	* [  +0.000729] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000b7a96011
	[  +0.001048] FS-Cache: N-key=[8] '7e613b0000000000'
	[  +0.003162] FS-Cache: Duplicate cookie detected
	[  +0.000726] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000949] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000c6b3040d
	[  +0.001031] FS-Cache: O-key=[8] '7e613b0000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000165fee4f
	[  +0.001045] FS-Cache: N-key=[8] '7e613b0000000000'
	[Oct 2 21:34] FS-Cache: Duplicate cookie detected
	[  +0.000705] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=0000000092679c6a
	[  +0.001107] FS-Cache: O-key=[8] '7d613b0000000000'
	[  +0.000706] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000935] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=000000007e0e0088
	[  +0.001044] FS-Cache: N-key=[8] '7d613b0000000000'
	[  +0.310553] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001087] FS-Cache: O-cookie d=00000000c0f15865{9p.inode} n=00000000e895d03e
	[  +0.001082] FS-Cache: O-key=[8] '83613b0000000000'
	[  +0.000736] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=00000000c0f15865{9p.inode} n=00000000734ba06c
	[  +0.001060] FS-Cache: N-key=[8] '83613b0000000000'
	[  +1.089292] 9pnet: p9_fd_create_tcp (1073420): problem connecting socket to 192.168.49.1
	
	* 
	* ==> etcd [07ba4f10da84d914fff7b7e014fff406c9ccc828f24578f2b96f7cd246943edb] <==
	* {"level":"info","ts":"2023-10-02T22:02:26.554608Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-02T22:02:26.554838Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:02:26.554854Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:02:26.555085Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-10-02T22:02:26.555252Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555287Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555298Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-02T22:02:26.555533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-10-02T22:02:26.555596Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-10-02T22:02:26.555671Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T22:02:26.5557Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-02T22:02:27.737358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:02:27.737617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737649Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.737715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2023-10-02T22:02:27.738893Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-050274 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T22:02:27.738982Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:02:27.740292Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T22:02:27.739003Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:02:27.741666Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-02T22:02:27.74526Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T22:02:27.745335Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [4b6c0654becf288c87055f9c9f13305ebd59a5cffca4bbb0ee62ee0194f39959] <==
	* {"level":"info","ts":"2023-10-02T21:59:58.835595Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:00.706286Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-10-02T22:00:00.706546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.706586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.70664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.706676Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-10-02T22:00:00.708678Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-050274 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-02T22:00:00.708924Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:00:00.710121Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-02T22:00:00.710381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-02T22:00:00.711497Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-10-02T22:00:00.716198Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-02T22:00:00.716308Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-02T22:00:20.674538Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-02T22:00:20.674608Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-050274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2023-10-02T22:00:20.674693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.675244Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.695512Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-02T22:00:20.695613Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-02T22:00:20.695704Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-10-02T22:00:20.712899Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:20.713103Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-10-02T22:00:20.713161Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-050274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:04:42 up  4:47,  0 users,  load average: 1.48, 1.74, 1.84
	Linux pause-050274 5.15.0-1045-aws #50~20.04.1-Ubuntu SMP Wed Sep 6 17:32:55 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [75fb3c3a6e10bfcc10de368a22085c1400aacb9b43d4a54a964306c72f3a9f2f] <==
	* I1002 22:00:03.935714       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 22:00:03.937447       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1002 22:00:03.937962       1 main.go:116] setting mtu 1500 for CNI 
	I1002 22:00:03.938065       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 22:00:03.938138       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 22:00:04.225379       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:04.225716       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:05.226355       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:07.227439       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I1002 22:00:13.923686       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:00:13.925618       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [b11cf6cfdfbdc86a6187d98e8438bab3cedfc9bc9b73c7e9dbc5c1368cfb10f4] <==
	* I1002 22:03:27.182019       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1002 22:03:27.182103       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I1002 22:03:27.182285       1 main.go:116] setting mtu 1500 for CNI 
	I1002 22:03:27.182298       1 main.go:146] kindnetd IP family: "ipv4"
	I1002 22:03:27.182312       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1002 22:03:27.831441       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:27.833379       1 main.go:227] handling current node
	I1002 22:03:37.856822       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:37.856863       1 main.go:227] handling current node
	I1002 22:03:47.869037       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:47.869062       1 main.go:227] handling current node
	I1002 22:03:57.873591       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:03:57.873758       1 main.go:227] handling current node
	I1002 22:04:07.885654       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:07.885776       1 main.go:227] handling current node
	I1002 22:04:17.900813       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:17.900942       1 main.go:227] handling current node
	I1002 22:04:27.905027       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:27.905059       1 main.go:227] handling current node
	I1002 22:04:37.922386       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I1002 22:04:37.922518       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [930be0a17a5f5ea3c215d09f8e87f30473030cd7242fdc8246c2a716a0f170ca] <==
	* W1002 21:59:53.436401       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:59:55.093990       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1002 21:59:55.318412       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1002 21:59:57.832668       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	* 
	* ==> kube-apiserver [a19e78a138148f9cdd9939ea6967b86f22404dd61121c460ac0d60fb6451ab9c] <==
	* Trace[912481136]: [17.063390309s] [17.063390309s] END
	I1002 22:03:45.227858       1 trace.go:236] Trace[2133278845]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7c3d6130-d452-4a3f-9fe2-3130dea7af9b,client:192.168.67.2,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.28.2 (linux/arm64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (02-Oct-2023 22:03:27.971) (total time: 17256ms):
	Trace[2133278845]: ---"limitedReadBody succeeded" len:2832 35ms (22:03:28.007)
	Trace[2133278845]: ["GuaranteedUpdate etcd3" audit-id:7c3d6130-d452-4a3f-9fe2-3130dea7af9b,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 17219ms (22:03:28.008)
	Trace[2133278845]:  ---"Txn call completed" 17192ms (22:03:45.227)]
	Trace[2133278845]: [17.256429476s] [17.256429476s] END
	I1002 22:03:45.230943       1 trace.go:236] Trace[1593935176]: "Get" accept:application/json,audit-id:e2aa66ae-369c-418f-aa6a-504e0e57493f,client:127.0.0.1,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet,user-agent:kubectl/v1.28.2 (linux/arm64) kubernetes/89a4ea3,verb:GET (02-Oct-2023 22:03:31.559) (total time: 13671ms):
	Trace[1593935176]: ---"About to write a response" 13670ms (22:03:45.230)
	Trace[1593935176]: [13.671621211s] [13.671621211s] END
	I1002 22:03:45.231789       1 trace.go:236] Trace[1086804050]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:11614c67-93bc-4d94-944e-3ee112245c8b,client:192.168.67.2,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet/status,user-agent:kube-controller-manager/v1.28.2 (linux/arm64) kubernetes/89a4ea3/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (02-Oct-2023 22:03:27.964) (total time: 17267ms):
	Trace[1086804050]: ["GuaranteedUpdate etcd3" audit-id:11614c67-93bc-4d94-944e-3ee112245c8b,key:/daemonsets/kube-system/kindnet,type:*apps.DaemonSet,resource:daemonsets.apps 17259ms (22:03:27.972)
	Trace[1086804050]:  ---"About to Encode" 78ms (22:03:28.054)
	Trace[1086804050]:  ---"Txn call completed" 17175ms (22:03:45.230)]
	Trace[1086804050]: [17.267139942s] [17.267139942s] END
	I1002 22:03:45.252676       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1002 22:03:45.420055       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1002 22:03:45.431316       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1002 22:03:45.521457       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 22:03:45.529710       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	E1002 22:03:53.893288       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:03.894117       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:13.895052       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1002 22:04:23.896304       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1002 22:04:33.193783       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1002 22:04:33.897467       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [8eaba24185fb3e977406703017315b7a8ebdd682ef65e0e2c0c2aa28bf4cdbec] <==
	* I1002 22:02:27.108963       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:02:27.614929       1 controllermanager.go:189] "Starting" version="v1.28.2"
	I1002 22:02:27.614962       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:02:27.616267       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1002 22:02:27.616400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1002 22:02:27.617354       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1002 22:02:27.617416       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E1002 22:02:41.643821       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[-]etcd failed: reason withheld\\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[+]poststarthook/rbac/bootstrap-roles ok\\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/start-system-namespaces-contro
ller ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-status-available-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	* 
	* ==> kube-controller-manager [bbb1e358b145912c2ca24bdaf715f057b64d96b3d2ad42d296be9f3ee64227dd] <==
	* I1002 22:03:27.781645       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-050274"
	I1002 22:03:27.781786       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1002 22:03:27.781968       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1002 22:03:27.782154       1 taint_manager.go:211] "Sending events to api server"
	I1002 22:03:27.785950       1 event.go:307] "Event occurred" object="pause-050274" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-050274 event: Registered Node pause-050274 in Controller"
	I1002 22:03:28.073952       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="403.411662ms"
	I1002 22:03:28.085453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.933µs"
	I1002 22:03:28.113288       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:03:28.113391       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1002 22:03:28.155479       1 shared_informer.go:318] Caches are synced for garbage collector
	I1002 22:03:28.173402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.9µs"
	I1002 22:03:28.646470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.229298ms"
	I1002 22:03:28.646649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="141.021µs"
	I1002 22:03:28.679265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.686211ms"
	I1002 22:03:28.679398       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.082µs"
	I1002 22:03:32.782364       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1002 22:04:33.201744       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1002 22:04:33.225822       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cm5nm"
	I1002 22:04:33.271617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.033127ms"
	I1002 22:04:33.289595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.90569ms"
	I1002 22:04:33.289873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.1µs"
	I1002 22:04:38.394802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.931µs"
	I1002 22:04:38.768534       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.095µs"
	I1002 22:04:38.778754       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.598µs"
	I1002 22:04:38.796035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.1µs"
	
	* 
	* ==> kube-proxy [47232deeac89ddbb5fe9c1445105e8e2f3fc2ff7097b9942b416ddaa52fbcc66] <==
	* I1002 22:00:04.024604       1 server_others.go:69] "Using iptables proxy"
	E1002 22:00:04.027923       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:05.184639       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:07.383833       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-050274": dial tcp 192.168.67.2:8443: connect: connection refused
	I1002 22:00:14.012806       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1002 22:00:14.118921       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:00:14.125460       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:00:14.125507       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:00:14.125515       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:00:14.125567       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:00:14.125820       1 server.go:846] "Version info" version="v1.28.2"
	I1002 22:00:14.125838       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:00:14.127280       1 config.go:188] "Starting service config controller"
	I1002 22:00:14.127336       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:00:14.127370       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:00:14.127374       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:00:14.127913       1 config.go:315] "Starting node config controller"
	I1002 22:00:14.127932       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:00:14.227461       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 22:00:14.227516       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:00:14.228127       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f7e4a5a8ab1883e9ccca70bcbe52e4db6b05fa6a4d51dc58a7ffc387d15f1110] <==
	* I1002 22:03:28.225482       1 server_others.go:69] "Using iptables proxy"
	I1002 22:03:28.254620       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I1002 22:03:28.301023       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 22:03:28.304770       1 server_others.go:152] "Using iptables Proxier"
	I1002 22:03:28.304889       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1002 22:03:28.304935       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1002 22:03:28.305083       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1002 22:03:28.305678       1 server.go:846] "Version info" version="v1.28.2"
	I1002 22:03:28.306003       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:03:28.307068       1 config.go:188] "Starting service config controller"
	I1002 22:03:28.307210       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1002 22:03:28.307314       1 config.go:97] "Starting endpoint slice config controller"
	I1002 22:03:28.307356       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1002 22:03:28.308094       1 config.go:315] "Starting node config controller"
	I1002 22:03:28.310701       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1002 22:03:28.408220       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1002 22:03:28.408327       1 shared_informer.go:318] Caches are synced for service config
	I1002 22:03:28.411617       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ae4711ea86465dc3ba99ae5e161fcb3dd98b535398d61346c9ba6deea18960ec] <==
	* I1002 22:02:27.787978       1 serving.go:348] Generated self-signed cert in-memory
	I1002 22:03:20.698146       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1002 22:03:20.698179       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 22:03:20.713936       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1002 22:03:20.714046       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1002 22:03:20.714132       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 22:03:20.714169       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:03:20.714211       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 22:03:20.714240       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1002 22:03:20.714620       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1002 22:03:20.714695       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1002 22:03:20.814266       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1002 22:03:20.814269       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1002 22:03:20.814402       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kube-scheduler [ce0a25ea6fc39bf9f451efb51a555f4984837f8f9d66bb4c3d4c8e5757a11601] <==
	* E1002 22:00:09.571010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.67.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1002 22:00:09.574380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	E1002 22:00:09.574418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.67.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.67.2:8443: connect: connection refused
	W1002 22:00:13.962669       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.963683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.963838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 22:00:13.963879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1002 22:00:13.963953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 22:00:13.963991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1002 22:00:13.964069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964177       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964212       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1002 22:00:13.964333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1002 22:00:13.964405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 22:00:13.964446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1002 22:00:13.964511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 22:00:13.964546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1002 22:00:13.964614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1002 22:00:13.964649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1002 22:00:15.366681       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1002 22:00:20.512563       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I1002 22:00:20.513162       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E1002 22:00:20.513708       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.383681    3953 manager.go:1106] Failed to create existing container: /crio-7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Error finding container 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Status 404 returned error can't find the container with id 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.383900    3953 manager.go:1106] Failed to create existing container: /crio-9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Error finding container 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Status 404 returned error can't find the container with id 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384123    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Error finding container 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50: Status 404 returned error can't find the container with id 7be7cba416b4cc3d2fcbd9a16d60145c822457d6851b69ebd7810d0b2997fe50
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384351    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e: Error finding container 26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e: Status 404 returned error can't find the container with id 26de429363acad118e38006fd3aa287d2349e596f2518caa6775a3c2e2f5389e
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384577    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Error finding container daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Status 404 returned error can't find the container with id daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.384846    3953 manager.go:1106] Failed to create existing container: /crio-daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Error finding container daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada: Status 404 returned error can't find the container with id daf3f7c3b2ad856f5956411519d5b54efdcee91d4156e7262a0e81edeebf4ada
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.386724    3953 manager.go:1106] Failed to create existing container: /crio-f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8: Error finding container f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8: Status 404 returned error can't find the container with id f65507beae24a929e295cd7cc4cf2aff3bb95f0a3f3c05f31ea2aa3ee6a6c0c8
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.386885    3953 manager.go:1106] Failed to create existing container: /crio-97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c: Error finding container 97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c: Status 404 returned error can't find the container with id 97e11f94c241b39d6d247aebcd08bbfc63e977ed05181b85701210333d18c68c
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.387048    3953 manager.go:1106] Failed to create existing container: /docker/cbe09fdff1d2f29956b039a49467940b0b65aa1084eb32d722e2533ddcd7b80f/crio-9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Error finding container 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec: Status 404 returned error can't find the container with id 9fcc0372960b9d2d8ce8c09e89535b7b0ba116375e5456a4aa3a55e9a12a37ec
	Oct 02 22:04:26 pause-050274 kubelet[3953]: E1002 22:04:26.387336    3953 manager.go:1106] Failed to create existing container: /crio-e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca: Error finding container e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca: Status 404 returned error can't find the container with id e42a1887c1ea2f468424ded530ce061a02849f5464b757887cee401d73deabca
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.468477    3953 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2q6\" (UniqueName: \"kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6\") pod \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\" (UID: \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\") "
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.468532    3953 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume\") pod \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\" (UID: \"18849f27-d4fc-44c4-b9e0-ec7b818e9c76\") "
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.469557    3953 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume" (OuterVolumeSpecName: "config-volume") pod "18849f27-d4fc-44c4-b9e0-ec7b818e9c76" (UID: "18849f27-d4fc-44c4-b9e0-ec7b818e9c76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.474550    3953 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6" (OuterVolumeSpecName: "kube-api-access-jw2q6") pod "18849f27-d4fc-44c4-b9e0-ec7b818e9c76" (UID: "18849f27-d4fc-44c4-b9e0-ec7b818e9c76"). InnerVolumeSpecName "kube-api-access-jw2q6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.569350    3953 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jw2q6\" (UniqueName: \"kubernetes.io/projected/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-kube-api-access-jw2q6\") on node \"pause-050274\" DevicePath \"\""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.569394    3953 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18849f27-d4fc-44c4-b9e0-ec7b818e9c76-config-volume\") on node \"pause-050274\" DevicePath \"\""
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.749523    3953 scope.go:117] "RemoveContainer" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.783190    3953 scope.go:117] "RemoveContainer" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.811423    3953 scope.go:117] "RemoveContainer" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: E1002 22:04:38.811924    3953 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist" containerID="fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812026    3953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d"} err="failed to get container status \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": rpc error: code = NotFound desc = could not find container \"fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d\": container with ID starting with fbdd2662bde132ad16409151c2dc37df789dbfa5d7cb4bb13676d3a37527c27d not found: ID does not exist"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812044    3953 scope.go:117] "RemoveContainer" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: E1002 22:04:38.812523    3953 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist" containerID="1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"
	Oct 02 22:04:38 pause-050274 kubelet[3953]: I1002 22:04:38.812558    3953 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60"} err="failed to get container status \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": rpc error: code = NotFound desc = could not find container \"1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60\": container with ID starting with 1022ec4d87df9b70303ba06fbce43fd0ba77643f1d17709c2c42ee448ceced60 not found: ID does not exist"
	Oct 02 22:04:40 pause-050274 kubelet[3953]: I1002 22:04:40.122562    3953 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="18849f27-d4fc-44c4-b9e0-ec7b818e9c76" path="/var/lib/kubelet/pods/18849f27-d4fc-44c4-b9e0-ec7b818e9c76/volumes"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-050274 -n pause-050274
helpers_test.go:261: (dbg) Run:  kubectl --context pause-050274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (319.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (75.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.107089978.exe start -p stopped-upgrade-283217 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1002 22:05:46.832763 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.107089978.exe start -p stopped-upgrade-283217 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.536396085s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.107089978.exe -p stopped-upgrade-283217 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.107089978.exe -p stopped-upgrade-283217 stop: (2.43305336s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-283217 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-283217 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.767715048s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-283217] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-283217 in cluster stopped-upgrade-283217
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-283217" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:05:56.046039 1172052 out.go:296] Setting OutFile to fd 1 ...
	I1002 22:05:56.046374 1172052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:05:56.046407 1172052 out.go:309] Setting ErrFile to fd 2...
	I1002 22:05:56.046429 1172052 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:05:56.046724 1172052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 22:05:56.047172 1172052 out.go:303] Setting JSON to false
	I1002 22:05:56.050140 1172052 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17303,"bootTime":1696267053,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 22:05:56.050250 1172052 start.go:138] virtualization:  
	I1002 22:05:56.054487 1172052 out.go:177] * [stopped-upgrade-283217] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 22:05:56.056384 1172052 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1002 22:05:56.064616 1172052 notify.go:220] Checking for updates...
	I1002 22:05:56.071192 1172052 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 22:05:56.073697 1172052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:05:56.075711 1172052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:05:56.077811 1172052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 22:05:56.080673 1172052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:05:56.082900 1172052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:05:56.085646 1172052 config.go:182] Loaded profile config "stopped-upgrade-283217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:05:56.088103 1172052 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1002 22:05:56.090184 1172052 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 22:05:56.158029 1172052 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 22:05:56.158123 1172052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:05:56.350774 1172052 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:05:56.330403529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:05:56.350876 1172052 docker.go:294] overlay module found
	I1002 22:05:56.352930 1172052 out.go:177] * Using the docker driver based on existing profile
	I1002 22:05:56.351922 1172052 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1002 22:05:56.355947 1172052 start.go:298] selected driver: docker
	I1002 22:05:56.355961 1172052 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-283217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-283217 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.82 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 22:05:56.356060 1172052 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:05:56.356641 1172052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:05:56.485903 1172052 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:05:56.474937511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:05:56.486204 1172052 cni.go:84] Creating CNI manager for ""
	I1002 22:05:56.486214 1172052 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 22:05:56.486226 1172052 start_flags.go:321] config:
	{Name:stopped-upgrade-283217 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-283217 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.82 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 22:05:56.492691 1172052 out.go:177] * Starting control plane node stopped-upgrade-283217 in cluster stopped-upgrade-283217
	I1002 22:05:56.494735 1172052 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 22:05:56.496602 1172052 out.go:177] * Pulling base image ...
	I1002 22:05:56.498615 1172052 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1002 22:05:56.498695 1172052 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1002 22:05:56.519412 1172052 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1002 22:05:56.519440 1172052 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1002 22:05:56.567839 1172052 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1002 22:05:56.568007 1172052 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/stopped-upgrade-283217/config.json ...
	I1002 22:05:56.568049 1172052 cache.go:107] acquiring lock: {Name:mk828a58fff182971a82ba27f7f0d1f9658a0a29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568141 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1002 22:05:56.568151 1172052 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.833µs
	I1002 22:05:56.568159 1172052 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1002 22:05:56.568171 1172052 cache.go:107] acquiring lock: {Name:mk57bf96569c09fe168ec1fb0058d1b2744351c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568201 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1002 22:05:56.568206 1172052 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.348µs
	I1002 22:05:56.568213 1172052 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1002 22:05:56.568229 1172052 cache.go:107] acquiring lock: {Name:mka491d9888aed97f97d4ecaabf6aca59f840d40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568245 1172052 cache.go:195] Successfully downloaded all kic artifacts
	I1002 22:05:56.568255 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1002 22:05:56.568261 1172052 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.738µs
	I1002 22:05:56.568272 1172052 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1002 22:05:56.568281 1172052 cache.go:107] acquiring lock: {Name:mk92ffe3650e02e4534f0eb8faffd302ff8f1f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568289 1172052 start.go:365] acquiring machines lock for stopped-upgrade-283217: {Name:mk33f0c0a8bf93c41396e1fcf80f8168ac4b3d2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568307 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1002 22:05:56.568312 1172052 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.936µs
	I1002 22:05:56.568319 1172052 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1002 22:05:56.568330 1172052 start.go:369] acquired machines lock for "stopped-upgrade-283217" in 26.798µs
	I1002 22:05:56.568331 1172052 cache.go:107] acquiring lock: {Name:mkb3c2acda63bbab01db5c8dceb6574a52ff9d85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568344 1172052 start.go:96] Skipping create...Using existing machine configuration
	I1002 22:05:56.568350 1172052 fix.go:54] fixHost starting: 
	I1002 22:05:56.568357 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1002 22:05:56.568363 1172052 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 33.772µs
	I1002 22:05:56.568370 1172052 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1002 22:05:56.568379 1172052 cache.go:107] acquiring lock: {Name:mkbe0c3870f8630be7dbc27575b7b58ed198ae78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568403 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1002 22:05:56.568407 1172052 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.596µs
	I1002 22:05:56.568413 1172052 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1002 22:05:56.568422 1172052 cache.go:107] acquiring lock: {Name:mk8b14f4ccec47ae702a829037d0fc81a29408e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568444 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1002 22:05:56.568449 1172052 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.521µs
	I1002 22:05:56.568455 1172052 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1002 22:05:56.568475 1172052 cache.go:107] acquiring lock: {Name:mke4bd636e55f1c34266bcf6f1138c0d3f8866c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 22:05:56.568505 1172052 cache.go:115] /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1002 22:05:56.568510 1172052 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 48.016µs
	I1002 22:05:56.568517 1172052 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1002 22:05:56.568523 1172052 cache.go:87] Successfully saved all images to host disk.
	I1002 22:05:56.568608 1172052 cli_runner.go:164] Run: docker container inspect stopped-upgrade-283217 --format={{.State.Status}}
	I1002 22:05:56.603055 1172052 fix.go:102] recreateIfNeeded on stopped-upgrade-283217: state=Stopped err=<nil>
	W1002 22:05:56.603092 1172052 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 22:05:56.606361 1172052 out.go:177] * Restarting existing docker container for "stopped-upgrade-283217" ...
	I1002 22:05:56.608457 1172052 cli_runner.go:164] Run: docker start stopped-upgrade-283217
	I1002 22:05:57.179384 1172052 cli_runner.go:164] Run: docker container inspect stopped-upgrade-283217 --format={{.State.Status}}
	I1002 22:05:57.203400 1172052 kic.go:426] container "stopped-upgrade-283217" state is running.
	I1002 22:05:57.203797 1172052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-283217
	I1002 22:05:57.228215 1172052 profile.go:148] Saving config to /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/stopped-upgrade-283217/config.json ...
	I1002 22:05:57.228452 1172052 machine.go:88] provisioning docker machine ...
	I1002 22:05:57.228466 1172052 ubuntu.go:169] provisioning hostname "stopped-upgrade-283217"
	I1002 22:05:57.228516 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:05:57.258413 1172052 main.go:141] libmachine: Using SSH client type: native
	I1002 22:05:57.258831 1172052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33907 <nil> <nil>}
	I1002 22:05:57.258844 1172052 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-283217 && echo "stopped-upgrade-283217" | sudo tee /etc/hostname
	I1002 22:05:57.259611 1172052 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1002 22:06:00.445425 1172052 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-283217
	
	I1002 22:06:00.445571 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:00.477097 1172052 main.go:141] libmachine: Using SSH client type: native
	I1002 22:06:00.477568 1172052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33907 <nil> <nil>}
	I1002 22:06:00.477587 1172052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-283217' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-283217/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-283217' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 22:06:00.699659 1172052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 22:06:00.699686 1172052 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17323-1042317/.minikube CaCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17323-1042317/.minikube}
	I1002 22:06:00.699706 1172052 ubuntu.go:177] setting up certificates
	I1002 22:06:00.699715 1172052 provision.go:83] configureAuth start
	I1002 22:06:00.699775 1172052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-283217
	I1002 22:06:00.755173 1172052 provision.go:138] copyHostCerts
	I1002 22:06:00.755271 1172052 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem, removing ...
	I1002 22:06:00.755281 1172052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem
	I1002 22:06:00.755365 1172052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.pem (1082 bytes)
	I1002 22:06:00.755466 1172052 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem, removing ...
	I1002 22:06:00.755475 1172052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem
	I1002 22:06:00.755513 1172052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/cert.pem (1123 bytes)
	I1002 22:06:00.755575 1172052 exec_runner.go:144] found /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem, removing ...
	I1002 22:06:00.755580 1172052 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem
	I1002 22:06:00.755604 1172052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17323-1042317/.minikube/key.pem (1679 bytes)
	I1002 22:06:00.755655 1172052 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-283217 san=[192.168.59.82 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-283217]
	I1002 22:06:01.231922 1172052 provision.go:172] copyRemoteCerts
	I1002 22:06:01.232035 1172052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 22:06:01.232117 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:01.253044 1172052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/stopped-upgrade-283217/id_rsa Username:docker}
	I1002 22:06:01.370774 1172052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 22:06:01.410933 1172052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 22:06:01.462694 1172052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 22:06:01.512651 1172052 provision.go:86] duration metric: configureAuth took 812.921023ms
	I1002 22:06:01.512731 1172052 ubuntu.go:193] setting minikube options for container-runtime
	I1002 22:06:01.512987 1172052 config.go:182] Loaded profile config "stopped-upgrade-283217": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1002 22:06:01.513152 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:01.553929 1172052 main.go:141] libmachine: Using SSH client type: native
	I1002 22:06:01.554349 1172052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3adac0] 0x3b0230 <nil>  [] 0s} 127.0.0.1 33907 <nil> <nil>}
	I1002 22:06:01.554364 1172052 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 22:06:02.203098 1172052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 22:06:02.203125 1172052 machine.go:91] provisioned docker machine in 4.974663694s
	I1002 22:06:02.203139 1172052 start.go:300] post-start starting for "stopped-upgrade-283217" (driver="docker")
	I1002 22:06:02.203150 1172052 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 22:06:02.203223 1172052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 22:06:02.203262 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:02.257416 1172052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/stopped-upgrade-283217/id_rsa Username:docker}
	I1002 22:06:02.375631 1172052 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 22:06:02.384126 1172052 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1002 22:06:02.384154 1172052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 22:06:02.384166 1172052 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1002 22:06:02.384179 1172052 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1002 22:06:02.384197 1172052 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/addons for local assets ...
	I1002 22:06:02.384263 1172052 filesync.go:126] Scanning /home/jenkins/minikube-integration/17323-1042317/.minikube/files for local assets ...
	I1002 22:06:02.384352 1172052 filesync.go:149] local asset: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem -> 10477322.pem in /etc/ssl/certs
	I1002 22:06:02.384459 1172052 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 22:06:02.396230 1172052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/ssl/certs/10477322.pem --> /etc/ssl/certs/10477322.pem (1708 bytes)
	I1002 22:06:02.428168 1172052 start.go:303] post-start completed in 225.012149ms
	I1002 22:06:02.428250 1172052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 22:06:02.428294 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:02.455296 1172052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/stopped-upgrade-283217/id_rsa Username:docker}
	I1002 22:06:02.557915 1172052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 22:06:02.567028 1172052 fix.go:56] fixHost completed within 5.998667655s
	I1002 22:06:02.567048 1172052 start.go:83] releasing machines lock for "stopped-upgrade-283217", held for 5.998709714s
	I1002 22:06:02.567127 1172052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-283217
	I1002 22:06:02.599056 1172052 ssh_runner.go:195] Run: cat /version.json
	I1002 22:06:02.599070 1172052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 22:06:02.599122 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:02.599144 1172052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-283217
	I1002 22:06:02.623189 1172052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/stopped-upgrade-283217/id_rsa Username:docker}
	I1002 22:06:02.661794 1172052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33907 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/stopped-upgrade-283217/id_rsa Username:docker}
	W1002 22:06:02.736571 1172052 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 22:06:02.736654 1172052 ssh_runner.go:195] Run: systemctl --version
	I1002 22:06:02.882533 1172052 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 22:06:03.003059 1172052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 22:06:03.011453 1172052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:06:03.052050 1172052 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1002 22:06:03.052143 1172052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 22:06:03.100835 1172052 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 22:06:03.100863 1172052 start.go:469] detecting cgroup driver to use...
	I1002 22:06:03.100900 1172052 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1002 22:06:03.100963 1172052 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 22:06:03.144576 1172052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 22:06:03.158600 1172052 docker.go:197] disabling cri-docker service (if available) ...
	I1002 22:06:03.158667 1172052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 22:06:03.174019 1172052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 22:06:03.191539 1172052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1002 22:06:03.205936 1172052 docker.go:207] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1002 22:06:03.206001 1172052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 22:06:03.340876 1172052 docker.go:213] disabling docker service ...
	I1002 22:06:03.340948 1172052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 22:06:03.354494 1172052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 22:06:03.368197 1172052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 22:06:03.519137 1172052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 22:06:03.664821 1172052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 22:06:03.681723 1172052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 22:06:03.702540 1172052 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1002 22:06:03.702625 1172052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 22:06:03.718536 1172052 out.go:177] 
	W1002 22:06:03.720676 1172052 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1002 22:06:03.720848 1172052 out.go:239] * 
	* 
	W1002 22:06:03.721938 1172052 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 22:06:03.723972 1172052 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-283217 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (75.75s)
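The exit status 90 above traces to the last sed in the log: the restored v1.17.0-era kicbase container (gcr.io/k8s-minikube/kicbase:v0.0.17) has no /etc/crio/crio.conf.d/02-crio.conf, so the pause_image rewrite fails with "No such file or directory" and start aborts with RUNTIME_ENABLE. A minimal sketch of a manual check and workaround is below; the container name comes from the log, and writing the drop-in by hand is an assumed local workaround, not minikube's own fix.

	# Confirm the drop-in directory is missing inside the upgraded container (name taken from the log above)
	docker exec stopped-upgrade-283217 ls /etc/crio/crio.conf.d/
	# Assumed workaround: create a minimal drop-in carrying the pause image the log tries to set, then restart cri-o
	docker exec stopped-upgrade-283217 sh -c 'mkdir -p /etc/crio/crio.conf.d && printf "[crio.image]\npause_image = \"registry.k8s.io/pause:3.2\"\n" > /etc/crio/crio.conf.d/02-crio.conf'
	docker exec stopped-upgrade-283217 systemctl restart crio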

                                                
                                    

Test pass (262/299)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.01
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.17
10 TestDownloadOnly/v1.28.2/json-events 11.41
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.6
22 TestAddons/Setup 159.76
24 TestAddons/parallel/Registry 15.94
26 TestAddons/parallel/InspektorGadget 10.84
27 TestAddons/parallel/MetricsServer 5.9
30 TestAddons/parallel/CSI 74.31
31 TestAddons/parallel/Headlamp 12.3
32 TestAddons/parallel/CloudSpanner 5.59
33 TestAddons/parallel/LocalPath 53.12
36 TestAddons/serial/GCPAuth/Namespaces 0.18
37 TestAddons/StoppedEnableDisable 12.39
38 TestCertOptions 36.02
39 TestCertExpiration 255.37
41 TestForceSystemdFlag 49.5
42 TestForceSystemdEnv 33.56
48 TestErrorSpam/setup 33.41
49 TestErrorSpam/start 0.85
50 TestErrorSpam/status 1.11
51 TestErrorSpam/pause 1.86
52 TestErrorSpam/unpause 2.02
53 TestErrorSpam/stop 1.44
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 77.5
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 41.98
60 TestFunctional/serial/KubeContext 0.06
61 TestFunctional/serial/KubectlGetPods 0.1
64 TestFunctional/serial/CacheCmd/cache/add_remote 4.3
65 TestFunctional/serial/CacheCmd/cache/add_local 1.06
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
67 TestFunctional/serial/CacheCmd/cache/list 0.06
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
69 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
70 TestFunctional/serial/CacheCmd/cache/delete 0.13
71 TestFunctional/serial/MinikubeKubectlCmd 0.14
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
73 TestFunctional/serial/ExtraConfig 36.24
74 TestFunctional/serial/ComponentHealth 0.11
75 TestFunctional/serial/LogsCmd 1.87
76 TestFunctional/serial/LogsFileCmd 1.9
77 TestFunctional/serial/InvalidService 4.7
79 TestFunctional/parallel/ConfigCmd 0.49
80 TestFunctional/parallel/DashboardCmd 10.75
81 TestFunctional/parallel/DryRun 0.49
82 TestFunctional/parallel/InternationalLanguage 0.21
83 TestFunctional/parallel/StatusCmd 1.26
87 TestFunctional/parallel/ServiceCmdConnect 9.65
88 TestFunctional/parallel/AddonsCmd 0.22
89 TestFunctional/parallel/PersistentVolumeClaim 24.99
91 TestFunctional/parallel/SSHCmd 0.78
92 TestFunctional/parallel/CpCmd 1.54
94 TestFunctional/parallel/FileSync 0.35
95 TestFunctional/parallel/CertSync 2.07
99 TestFunctional/parallel/NodeLabels 0.08
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
103 TestFunctional/parallel/License 0.26
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
110 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
114 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
115 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
117 TestFunctional/parallel/ProfileCmd/profile_list 0.41
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
119 TestFunctional/parallel/MountCmd/any-port 8.54
120 TestFunctional/parallel/ServiceCmd/List 0.58
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
123 TestFunctional/parallel/ServiceCmd/Format 0.41
124 TestFunctional/parallel/ServiceCmd/URL 0.41
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 0.8
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
132 TestFunctional/parallel/ImageCommands/ImageBuild 3
133 TestFunctional/parallel/ImageCommands/Setup 1.81
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.54
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.96
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.75
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.96
145 TestFunctional/delete_addon-resizer_images 0.08
146 TestFunctional/delete_my-image_image 0.02
147 TestFunctional/delete_minikube_cached_images 0.02
151 TestIngressAddonLegacy/StartLegacyK8sCluster 88.44
153 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.6
154 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.74
158 TestJSONOutput/start/Command 76.41
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.84
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.78
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 5.91
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.28
183 TestKicCustomNetwork/create_custom_network 42.7
184 TestKicCustomNetwork/use_default_bridge_network 32.08
185 TestKicExistingNetwork 34.29
186 TestKicCustomSubnet 35.27
187 TestKicStaticIP 33.58
188 TestMainNoArgs 0.05
189 TestMinikubeProfile 71.56
192 TestMountStart/serial/StartWithMountFirst 7.66
193 TestMountStart/serial/VerifyMountFirst 0.29
194 TestMountStart/serial/StartWithMountSecond 7.14
195 TestMountStart/serial/VerifyMountSecond 0.29
196 TestMountStart/serial/DeleteFirst 1.7
197 TestMountStart/serial/VerifyMountPostDelete 0.29
198 TestMountStart/serial/Stop 1.23
199 TestMountStart/serial/RestartStopped 7.76
200 TestMountStart/serial/VerifyMountPostStop 0.27
203 TestMultiNode/serial/FreshStart2Nodes 99.61
204 TestMultiNode/serial/DeployApp2Nodes 5.89
206 TestMultiNode/serial/AddNode 59.91
207 TestMultiNode/serial/ProfileList 0.34
208 TestMultiNode/serial/CopyFile 11.01
209 TestMultiNode/serial/StopNode 2.36
210 TestMultiNode/serial/StartAfterStop 13.19
211 TestMultiNode/serial/RestartKeepsNodes 124.57
212 TestMultiNode/serial/DeleteNode 5.09
213 TestMultiNode/serial/StopMultiNode 24.07
214 TestMultiNode/serial/RestartMultiNode 82.61
215 TestMultiNode/serial/ValidateNameConflict 33.08
220 TestPreload 147.77
222 TestScheduledStopUnix 111.04
225 TestInsufficientStorage 13.29
228 TestKubernetesUpgrade 378.23
231 TestPause/serial/Start 87.42
233 TestStoppedBinaryUpgrade/Setup 1.12
235 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 34.35
246 TestNoKubernetes/serial/StartWithStopK8s 6.99
247 TestNoKubernetes/serial/Start 8.87
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
249 TestNoKubernetes/serial/ProfileList 1.03
250 TestNoKubernetes/serial/Stop 1.25
251 TestNoKubernetes/serial/StartNoArgs 6.94
252 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
260 TestNetworkPlugins/group/false 3.71
265 TestStartStop/group/old-k8s-version/serial/FirstStart 113.99
267 TestStartStop/group/no-preload/serial/FirstStart 62.29
268 TestStartStop/group/old-k8s-version/serial/DeployApp 10.65
269 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
270 TestStartStop/group/old-k8s-version/serial/Stop 12.22
271 TestStartStop/group/no-preload/serial/DeployApp 9.54
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.59
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
274 TestStartStop/group/old-k8s-version/serial/SecondStart 443.03
275 TestStartStop/group/no-preload/serial/Stop 12.33
276 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
277 TestStartStop/group/no-preload/serial/SecondStart 631.16
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.4
281 TestStartStop/group/old-k8s-version/serial/Pause 3.65
283 TestStartStop/group/embed-certs/serial/FirstStart 51.58
284 TestStartStop/group/embed-certs/serial/DeployApp 9.52
285 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
286 TestStartStop/group/embed-certs/serial/Stop 12.23
287 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
288 TestStartStop/group/embed-certs/serial/SecondStart 354.83
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.38
292 TestStartStop/group/no-preload/serial/Pause 3.56
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.02
295 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.53
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
297 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
299 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 345.17
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
301 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
302 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
303 TestStartStop/group/embed-certs/serial/Pause 3.45
305 TestStartStop/group/newest-cni/serial/FirstStart 42.79
306 TestStartStop/group/newest-cni/serial/DeployApp 0
307 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.21
308 TestStartStop/group/newest-cni/serial/Stop 1.28
309 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
310 TestStartStop/group/newest-cni/serial/SecondStart 31.07
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
314 TestStartStop/group/newest-cni/serial/Pause 3.34
315 TestNetworkPlugins/group/auto/Start 78.39
316 TestNetworkPlugins/group/auto/KubeletFlags 0.44
317 TestNetworkPlugins/group/auto/NetCatPod 10.46
318 TestNetworkPlugins/group/auto/DNS 0.3
319 TestNetworkPlugins/group/auto/Localhost 0.22
320 TestNetworkPlugins/group/auto/HairPin 0.22
321 TestNetworkPlugins/group/kindnet/Start 88.99
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.06
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.53
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.52
326 TestNetworkPlugins/group/calico/Start 73.2
327 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
329 TestNetworkPlugins/group/kindnet/NetCatPod 13.34
330 TestNetworkPlugins/group/calico/ControllerPod 5.04
331 TestNetworkPlugins/group/calico/KubeletFlags 0.36
332 TestNetworkPlugins/group/calico/NetCatPod 10.45
333 TestNetworkPlugins/group/kindnet/DNS 0.3
334 TestNetworkPlugins/group/kindnet/Localhost 0.23
335 TestNetworkPlugins/group/kindnet/HairPin 0.26
336 TestNetworkPlugins/group/calico/DNS 0.34
337 TestNetworkPlugins/group/calico/Localhost 0.27
338 TestNetworkPlugins/group/calico/HairPin 0.27
339 TestNetworkPlugins/group/custom-flannel/Start 72.69
340 TestNetworkPlugins/group/enable-default-cni/Start 94.8
341 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
342 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.36
343 TestNetworkPlugins/group/custom-flannel/DNS 0.23
344 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
345 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
346 TestNetworkPlugins/group/flannel/Start 73.71
347 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
348 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.54
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
352 TestNetworkPlugins/group/bridge/Start 85.43
353 TestNetworkPlugins/group/flannel/ControllerPod 5.03
354 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
355 TestNetworkPlugins/group/flannel/NetCatPod 10.36
356 TestNetworkPlugins/group/flannel/DNS 0.22
357 TestNetworkPlugins/group/flannel/Localhost 0.21
358 TestNetworkPlugins/group/flannel/HairPin 0.2
359 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
360 TestNetworkPlugins/group/bridge/NetCatPod 10.37
361 TestNetworkPlugins/group/bridge/DNS 0.23
362 TestNetworkPlugins/group/bridge/Localhost 0.19
363 TestNetworkPlugins/group/bridge/HairPin 0.21
TestDownloadOnly/v1.16.0/json-events (16.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-585498 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-585498 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.011558888s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.01s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-585498
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-585498: exit status 85 (170.799314ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-585498 | jenkins | v1.31.2 | 02 Oct 23 21:22 UTC |          |
	|         | -p download-only-585498        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 21:22:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:22:37.700250 1047737 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:22:37.700392 1047737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:37.700401 1047737 out.go:309] Setting ErrFile to fd 2...
	I1002 21:22:37.700407 1047737 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:37.700667 1047737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	W1002 21:22:37.700816 1047737 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17323-1042317/.minikube/config/config.json: open /home/jenkins/minikube-integration/17323-1042317/.minikube/config/config.json: no such file or directory
	I1002 21:22:37.701230 1047737 out.go:303] Setting JSON to true
	I1002 21:22:37.702280 1047737 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14705,"bootTime":1696267053,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:22:37.702358 1047737 start.go:138] virtualization:  
	I1002 21:22:37.705790 1047737 out.go:97] [download-only-585498] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:22:37.708044 1047737 out.go:169] MINIKUBE_LOCATION=17323
	I1002 21:22:37.706130 1047737 notify.go:220] Checking for updates...
	W1002 21:22:37.706029 1047737 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 21:22:37.710421 1047737 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:37.712287 1047737 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:22:37.714165 1047737 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:22:37.717339 1047737 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 21:22:37.721823 1047737 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 21:22:37.722118 1047737 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:22:37.747185 1047737 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:22:37.747265 1047737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:37.832992 1047737 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-02 21:22:37.823321615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:22:37.833159 1047737 docker.go:294] overlay module found
	I1002 21:22:37.835501 1047737 out.go:97] Using the docker driver based on user configuration
	I1002 21:22:37.835569 1047737 start.go:298] selected driver: docker
	I1002 21:22:37.835589 1047737 start.go:902] validating driver "docker" against <nil>
	I1002 21:22:37.835726 1047737 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:37.905261 1047737 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-02 21:22:37.895340486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:22:37.905426 1047737 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 21:22:37.905718 1047737 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1002 21:22:37.905880 1047737 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:22:37.908368 1047737 out.go:169] Using Docker driver with root privileges
	I1002 21:22:37.910507 1047737 cni.go:84] Creating CNI manager for ""
	I1002 21:22:37.910531 1047737 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:22:37.910544 1047737 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:22:37.910572 1047737 start_flags.go:321] config:
	{Name:download-only-585498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-585498 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:22:37.912927 1047737 out.go:97] Starting control plane node download-only-585498 in cluster download-only-585498
	I1002 21:22:37.912954 1047737 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:22:37.915231 1047737 out.go:97] Pulling base image ...
	I1002 21:22:37.915261 1047737 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 21:22:37.915405 1047737 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:22:37.932889 1047737 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 21:22:37.933042 1047737 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 21:22:37.933146 1047737 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 21:22:37.979938 1047737 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:22:37.979970 1047737 cache.go:57] Caching tarball of preloaded images
	I1002 21:22:37.980111 1047737 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1002 21:22:37.982826 1047737 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 21:22:37.982852 1047737 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:22:38.110281 1047737 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1002 21:22:42.810718 1047737 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 21:22:52.077626 1047737 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:22:52.077763 1047737 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-585498"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.17s)
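
The download URL in the log above carries the expected digest as a query parameter (checksum=md5:743cd3b7071469270e4dbdc0d89badaa), and preload.go then verifies the file on disk. A minimal sketch of repeating that check by hand against the cached tarball path shown in the log:

    md5sum /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
    # should print 743cd3b7071469270e4dbdc0d89badaa followed by the file path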

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (11.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-585498 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-585498 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.407845794s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (11.41s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-585498
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-585498: exit status 85 (80.497939ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-585498 | jenkins | v1.31.2 | 02 Oct 23 21:22 UTC |          |
	|         | -p download-only-585498        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-585498 | jenkins | v1.31.2 | 02 Oct 23 21:22 UTC |          |
	|         | -p download-only-585498        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 21:22:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:22:53.894298 1047813 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:22:53.894586 1047813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:53.894616 1047813 out.go:309] Setting ErrFile to fd 2...
	I1002 21:22:53.894638 1047813 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:53.894969 1047813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	W1002 21:22:53.895178 1047813 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17323-1042317/.minikube/config/config.json: open /home/jenkins/minikube-integration/17323-1042317/.minikube/config/config.json: no such file or directory
	I1002 21:22:53.895471 1047813 out.go:303] Setting JSON to true
	I1002 21:22:53.896642 1047813 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14721,"bootTime":1696267053,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:22:53.896757 1047813 start.go:138] virtualization:  
	I1002 21:22:53.903961 1047813 out.go:97] [download-only-585498] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:22:53.904930 1047813 notify.go:220] Checking for updates...
	I1002 21:22:53.916195 1047813 out.go:169] MINIKUBE_LOCATION=17323
	I1002 21:22:53.927204 1047813 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:53.939447 1047813 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:22:53.966777 1047813 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:22:53.998594 1047813 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 21:22:54.045286 1047813 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 21:22:54.045853 1047813 config.go:182] Loaded profile config "download-only-585498": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1002 21:22:54.045946 1047813 start.go:810] api.Load failed for download-only-585498: filestore "download-only-585498": Docker machine "download-only-585498" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 21:22:54.046057 1047813 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 21:22:54.046082 1047813 start.go:810] api.Load failed for download-only-585498: filestore "download-only-585498": Docker machine "download-only-585498" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 21:22:54.071075 1047813 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:22:54.071161 1047813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:54.139649 1047813 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-02 21:22:54.129732672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:22:54.139751 1047813 docker.go:294] overlay module found
	I1002 21:22:54.156347 1047813 out.go:97] Using the docker driver based on existing profile
	I1002 21:22:54.156397 1047813 start.go:298] selected driver: docker
	I1002 21:22:54.156406 1047813 start.go:902] validating driver "docker" against &{Name:download-only-585498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-585498 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:22:54.156597 1047813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:54.222298 1047813 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-10-02 21:22:54.212093911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:22:54.222750 1047813 cni.go:84] Creating CNI manager for ""
	I1002 21:22:54.222767 1047813 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:22:54.222781 1047813 start_flags.go:321] config:
	{Name:download-only-585498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-585498 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:22:54.252965 1047813 out.go:97] Starting control plane node download-only-585498 in cluster download-only-585498
	I1002 21:22:54.253002 1047813 cache.go:122] Beginning downloading kic base image for docker with crio
	I1002 21:22:54.269623 1047813 out.go:97] Pulling base image ...
	I1002 21:22:54.269660 1047813 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:22:54.270061 1047813 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I1002 21:22:54.287165 1047813 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I1002 21:22:54.287320 1047813 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I1002 21:22:54.287345 1047813 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I1002 21:22:54.287354 1047813 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I1002 21:22:54.287408 1047813 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I1002 21:22:54.335972 1047813 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:22:54.335996 1047813 cache.go:57] Caching tarball of preloaded images
	I1002 21:22:54.336130 1047813 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1002 21:22:54.363246 1047813 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 21:22:54.363278 1047813 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:22:54.477705 1047813 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1002 21:23:03.693633 1047813 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1002 21:23:03.693740 1047813 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-585498"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)
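
After the two download-only runs, both preload tarballs referenced above should sit in the shared cache. A quick way to confirm, assuming the same MINIKUBE_HOME as in these logs:

    ls -lh /home/jenkins/minikube-integration/17323-1042317/.minikube/cache/preloaded-tarball/
    # expected: preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
    #           preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4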

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-585498
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-870867 --alsologtostderr --binary-mirror http://127.0.0.1:46249 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-870867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-870867
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestAddons/Setup (159.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-arm64 start -p addons-598993 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:89: (dbg) Done: out/minikube-linux-arm64 start -p addons-598993 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m39.759751217s)
--- PASS: TestAddons/Setup (159.76s)
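
The setup run turns everything on through repeated --addons flags on a single start. The same addons can also be toggled individually on the running profile; a sketch using the standard addons subcommands (not taken from this run):

    out/minikube-linux-arm64 -p addons-598993 addons list
    out/minikube-linux-arm64 -p addons-598993 addons enable ingress
    out/minikube-linux-arm64 -p addons-598993 addons disable ingress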

                                                
                                    
TestAddons/parallel/Registry (15.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 52.371482ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-84c9d" [9b98d40f-1c78-4339-97e4-d24d9682a23f] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.01704558s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7jxhh" [f095db8c-4c24-442b-809a-c0488c3579ca] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013591232s
addons_test.go:318: (dbg) Run:  kubectl --context addons-598993 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-598993 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-598993 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.725461978s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 ip
addons_test.go:366: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.94s)
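
The registry check resolves the in-cluster service from a busybox pod and then grabs the node IP; a stray DEBUG line further down (GET http://192.168.49.2:5000) suggests the registry proxy is reachable on the node at port 5000. A rough host-side equivalent, assuming the addon exposes the standard Docker registry HTTP API on that port:

    curl -s http://$(out/minikube-linux-arm64 -p addons-598993 ip):5000/v2/_catalog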

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bwvpw" [0882ae2f-3d87-482e-8efb-fe1aca29d055] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015179271s
addons_test.go:819: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-598993
addons_test.go:819: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-598993: (5.824522804s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.9s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 7.927053ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-qq7vr" [4bbba458-aca3-43cb-9507-4d820720e1d6] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015337032s
addons_test.go:393: (dbg) Run:  kubectl --context addons-598993 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.90s)

                                                
                                    
TestAddons/parallel/CSI (74.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 56.048346ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-598993 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/10/02 21:26:02 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-598993 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c5b83111-e632-4de1-b25d-cb2444842c41] Pending
helpers_test.go:344: "task-pv-pod" [c5b83111-e632-4de1-b25d-cb2444842c41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c5b83111-e632-4de1-b25d-cb2444842c41] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.020464685s
addons_test.go:562: (dbg) Run:  kubectl --context addons-598993 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-598993 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-598993 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-598993 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-598993 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-598993 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-598993 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e3e22c0a-cde8-4d56-93b9-f3f1ecb2af66] Pending
helpers_test.go:344: "task-pv-pod-restore" [e3e22c0a-cde8-4d56-93b9-f3f1ecb2af66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e3e22c0a-cde8-4d56-93b9-f3f1ecb2af66] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.018767967s
addons_test.go:604: (dbg) Run:  kubectl --context addons-598993 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-598993 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-598993 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-arm64 -p addons-598993 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.049709201s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (74.31s)
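
The long run of helpers_test.go:394 lines is the helper polling the PVC phase until it leaves Pending. A hand-rolled version of the same wait, assuming the kubectl context from the logs and the usual Bound phase as the target:

    until [ "$(kubectl --context addons-598993 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done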

                                                
                                    
TestAddons/parallel/Headlamp (12.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-598993 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-598993 --alsologtostderr -v=1: (1.268051135s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-2dxlr" [73875ec6-caf6-48c1-9eb3-4323d3f0439f] Pending
helpers_test.go:344: "headlamp-58b88cff49-2dxlr" [73875ec6-caf6-48c1-9eb3-4323d3f0439f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-2dxlr" [73875ec6-caf6-48c1-9eb3-4323d3f0439f] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.027265502s
--- PASS: TestAddons/parallel/Headlamp (12.30s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-vbbjz" [7c00c6ad-8225-4f97-8d0d-abbb48a6459b] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01401765s
addons_test.go:838: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-598993
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (53.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-598993 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-598993 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bf57ff5a-d754-4ad2-85c8-172e30b449ac] Pending
helpers_test.go:344: "test-local-path" [bf57ff5a-d754-4ad2-85c8-172e30b449ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bf57ff5a-d754-4ad2-85c8-172e30b449ac] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bf57ff5a-d754-4ad2-85c8-172e30b449ac] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.011538487s
addons_test.go:869: (dbg) Run:  kubectl --context addons-598993 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 ssh "cat /opt/local-path-provisioner/pvc-8a19ccc8-8ac4-441c-9b07-6dca426035a8_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-598993 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-598993 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-arm64 -p addons-598993 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-arm64 -p addons-598993 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.232049352s)
--- PASS: TestAddons/parallel/LocalPath (53.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-598993 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-598993 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-598993
addons_test.go:150: (dbg) Done: out/minikube-linux-arm64 stop -p addons-598993: (12.11391186s)
addons_test.go:154: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-598993
addons_test.go:158: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-598993
addons_test.go:163: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-598993
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

                                                
                                    
TestCertOptions (36.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-802310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-802310 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.282876758s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-802310 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-802310 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-802310 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-802310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-802310
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-802310: (2.007894935s)
--- PASS: TestCertOptions (36.02s)
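
The interesting assertion here is the openssl dump of the apiserver certificate, which is where the extra --apiserver-ips and --apiserver-names values should show up as SANs. A sketch of narrowing that output to just the SAN block (same ssh command as the test, filtered with grep), usable while the profile still exists:

    out/minikube-linux-arm64 -p cert-options-802310 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"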

                                                
                                    
TestCertExpiration (255.37s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-474773 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-474773 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.70181211s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-474773 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1002 22:11:19.180802 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:11:29.915807 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-474773 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (31.452711217s)
helpers_test.go:175: Cleaning up "cert-expiration-474773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-474773
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-474773: (2.216448987s)
--- PASS: TestCertExpiration (255.37s)

                                                
                                    
TestForceSystemdFlag (49.5s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-887415 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-887415 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.365734757s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-887415 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-887415" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-887415
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-887415: (3.773030465s)
--- PASS: TestForceSystemdFlag (49.50s)

                                                
                                    
TestForceSystemdEnv (33.56s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-962125 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-962125 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.164263263s)
helpers_test.go:175: Cleaning up "force-systemd-env-962125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-962125
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-962125: (2.390953185s)
--- PASS: TestForceSystemdEnv (33.56s)

                                                
                                    
TestErrorSpam/setup (33.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-971684 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-971684 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-971684 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-971684 --driver=docker  --container-runtime=crio: (33.40873734s)
--- PASS: TestErrorSpam/setup (33.41s)

                                                
                                    
TestErrorSpam/start (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 start --dry-run
--- PASS: TestErrorSpam/start (0.85s)

                                                
                                    
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
TestErrorSpam/pause (1.86s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 pause
--- PASS: TestErrorSpam/pause (1.86s)

                                                
                                    
TestErrorSpam/unpause (2.02s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

                                                
                                    
TestErrorSpam/stop (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 stop: (1.242621257s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-971684 --log_dir /tmp/nospam-971684 stop
--- PASS: TestErrorSpam/stop (1.44s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17323-1042317/.minikube/files/etc/test/nested/copy/1047732/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (77.5s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1002 21:30:46.833222 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:46.847199 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:46.857438 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:46.877685 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:46.917930 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:46.998247 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:47.165674 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:47.486172 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:48.127162 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:49.407370 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:51.968106 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:30:57.088874 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:31:07.329817 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:31:27.810505 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-277432 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.501514953s)
--- PASS: TestFunctional/serial/StartWithProxy (77.50s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.98s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --alsologtostderr -v=8
E1002 21:32:08.771320 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-277432 --alsologtostderr -v=8: (41.974716836s)
functional_test.go:659: soft start took 41.981097674s for "functional-277432" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.98s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-277432 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:3.1: (1.445502847s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:3.3: (1.489258803s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 cache add registry.k8s.io/pause:latest: (1.369485054s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-277432 /tmp/TestFunctionalserialCacheCmdcacheadd_local1077886106/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache add minikube-local-cache-test:functional-277432
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache delete minikube-local-cache-test:functional-277432
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-277432
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (332.298532ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 cache reload: (1.163283693s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 kubectl -- --context functional-277432 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-277432 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-277432 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.243578112s)
functional_test.go:757: restart took 36.243678846s for "functional-277432" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.24s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-277432 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.87s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 logs: (1.874651667s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 logs --file /tmp/TestFunctionalserialLogsFileCmd229338679/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 logs --file /tmp/TestFunctionalserialLogsFileCmd229338679/001/logs.txt: (1.895472904s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.90s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-277432 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-277432
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-277432: exit status 115 (539.036192ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31697 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-277432 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.70s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 config get cpus: exit status 14 (83.959048ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 config get cpus: exit status 14 (92.202072ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-277432 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-277432 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1073111: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.75s)

                                                
                                    
TestFunctional/parallel/DryRun (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-277432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.328725ms)

                                                
                                                
-- stdout --
	* [functional-277432] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:33:58.138253 1072867 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:33:58.138486 1072867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:33:58.138497 1072867 out.go:309] Setting ErrFile to fd 2...
	I1002 21:33:58.138503 1072867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:33:58.138800 1072867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:33:58.139178 1072867 out.go:303] Setting JSON to false
	I1002 21:33:58.140300 1072867 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15386,"bootTime":1696267053,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:33:58.140372 1072867 start.go:138] virtualization:  
	I1002 21:33:58.142779 1072867 out.go:177] * [functional-277432] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 21:33:58.144901 1072867 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:33:58.146643 1072867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:33:58.145093 1072867 notify.go:220] Checking for updates...
	I1002 21:33:58.150629 1072867 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:33:58.152631 1072867 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:33:58.154950 1072867 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:33:58.156994 1072867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:33:58.159306 1072867 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:33:58.159840 1072867 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:33:58.184181 1072867 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:33:58.184277 1072867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:33:58.266778 1072867 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 21:33:58.255786191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:33:58.266890 1072867 docker.go:294] overlay module found
	I1002 21:33:58.269223 1072867 out.go:177] * Using the docker driver based on existing profile
	I1002 21:33:58.271188 1072867 start.go:298] selected driver: docker
	I1002 21:33:58.271209 1072867 start.go:902] validating driver "docker" against &{Name:functional-277432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-277432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:33:58.271325 1072867 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:33:58.274229 1072867 out.go:177] 
	W1002 21:33:58.276153 1072867 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:33:58.278115 1072867 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.49s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-277432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-277432 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.884067ms)

                                                
                                                
-- stdout --
	* [functional-277432] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:33:57.931632 1072826 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:33:57.931796 1072826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:33:57.931803 1072826 out.go:309] Setting ErrFile to fd 2...
	I1002 21:33:57.931809 1072826 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:33:57.932155 1072826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:33:57.932495 1072826 out.go:303] Setting JSON to false
	I1002 21:33:57.934041 1072826 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15385,"bootTime":1696267053,"procs":335,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 21:33:57.934122 1072826 start.go:138] virtualization:  
	I1002 21:33:57.936826 1072826 out.go:177] * [functional-277432] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1002 21:33:57.939683 1072826 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 21:33:57.941631 1072826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:33:57.939891 1072826 notify.go:220] Checking for updates...
	I1002 21:33:57.945498 1072826 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 21:33:57.947757 1072826 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 21:33:57.949733 1072826 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 21:33:57.951906 1072826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:33:57.954481 1072826 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:33:57.955212 1072826 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 21:33:57.981495 1072826 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 21:33:57.981607 1072826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:33:58.076095 1072826 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-02 21:33:58.065505014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:33:58.076201 1072826 docker.go:294] overlay module found
	I1002 21:33:58.078438 1072826 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1002 21:33:58.080357 1072826 start.go:298] selected driver: docker
	I1002 21:33:58.080374 1072826 start.go:902] validating driver "docker" against &{Name:functional-277432 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-277432 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 21:33:58.080498 1072826 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:33:58.082735 1072826 out.go:177] 
	W1002 21:33:58.084531 1072826 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 21:33:58.086258 1072826 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-277432 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-277432 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-dk6n6" [493b5530-ba3c-4cc7-96ab-b1648f47493d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-dk6n6" [493b5530-ba3c-4cc7-96ab-b1648f47493d] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018359716s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30871
functional_test.go:1674: http://192.168.49.2:30871: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-dk6n6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30871
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.65s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (24.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6ebb0d19-2828-408a-aa14-eb5dc21cc114] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.035728428s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-277432 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-277432 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-277432 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-277432 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [573ca3a0-72a8-4d21-8506-e20c288e9319] Pending
helpers_test.go:344: "sp-pod" [573ca3a0-72a8-4d21-8506-e20c288e9319] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [573ca3a0-72a8-4d21-8506-e20c288e9319] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.012812396s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-277432 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-277432 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-277432 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d19937d1-ecf3-4c38-b804-8c6bff4ce97c] Pending
helpers_test.go:344: "sp-pod" [d19937d1-ecf3-4c38-b804-8c6bff4ce97c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d19937d1-ecf3-4c38-b804-8c6bff4ce97c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.01773153s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-277432 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.99s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh -n functional-277432 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 cp functional-277432:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1483777457/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh -n functional-277432 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1047732/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/test/nested/copy/1047732/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
TestFunctional/parallel/CertSync (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1047732.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/1047732.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1047732.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /usr/share/ca-certificates/1047732.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10477322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/10477322.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10477322.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /usr/share/ca-certificates/10477322.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.07s)
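The synced certificate is expected in three locations inside the node; the same probes by hand (the 1047732/10477322 names and the hashed .0 filenames are specific to this run's test files):

out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/1047732.pem"
out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /usr/share/ca-certificates/1047732.pem"
out/minikube-linux-arm64 -p functional-277432 ssh "sudo cat /etc/ssl/certs/51391683.0"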

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-277432 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active docker": exit status 1 (293.913168ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active containerd": exit status 1 (308.736375ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
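On a crio-backed profile the other runtimes are expected to be disabled; `systemctl is-active` exits non-zero for an inactive unit, so the exit status 3 above is the expected "inactive" result rather than a failure. The same check by hand:

# both should print "inactive" on a crio profile
out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active docker"
out/minikube-linux-arm64 -p functional-277432 ssh "sudo systemctl is-active containerd"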

                                                
                                    
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1070851: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-277432 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d0ab20a6-f56d-4238-b276-67d895d90aa9] Pending
helpers_test.go:344: "nginx-svc" [d0ab20a6-f56d-4238-b276-67d895d90aa9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1002 21:33:30.691532 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [d0ab20a6-f56d-4238-b276-67d895d90aa9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.031780312s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-277432 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.30.164 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
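Taken together, the tunnel sub-tests start a tunnel, deploy a LoadBalancer service, read its ingress IP, hit it directly from the host, and then stop the tunnel; a minimal sketch of that flow (testsvc.yaml is the testdata manifest used above; the IP differs per run):

# keep a tunnel running in the background
out/minikube-linux-arm64 -p functional-277432 tunnel --alsologtostderr &
kubectl --context functional-277432 apply -f testdata/testsvc.yaml
# once the service is assigned an ingress IP it is reachable straight from the host
IP=$(kubectl --context functional-277432 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP"
# stop the background tunnel when done
kill %1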

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-277432 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-277432 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-p9bsf" [d177f25c-29be-4ad2-bb5b-93734ec753cc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-p9bsf" [d177f25c-29be-4ad2-bb5b-93734ec753cc] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.017189526s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)
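The hello-node workload used by the later service sub-tests is created with stock kubectl; a sketch:

# deploy a small echo server and expose it as a NodePort service
kubectl --context functional-277432 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-277432 expose deployment hello-node --type=NodePort --port=8080
# wait for the pod behind the app=hello-node selector to be Running
kubectl --context functional-277432 get pods -l app=hello-node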

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "350.119817ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "56.722473ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "346.917091ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "61.330272ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
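The profile sub-tests exercise the listing variants shown above; for reference, the forms used in this run:

out/minikube-linux-arm64 profile list
out/minikube-linux-arm64 profile list -l
out/minikube-linux-arm64 profile list -o json
out/minikube-linux-arm64 profile list -o json --light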

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdany-port1329648494/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696282433121314619" to /tmp/TestFunctionalparallelMountCmdany-port1329648494/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696282433121314619" to /tmp/TestFunctionalparallelMountCmdany-port1329648494/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696282433121314619" to /tmp/TestFunctionalparallelMountCmdany-port1329648494/001/test-1696282433121314619
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (450.929977ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 21:33 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 21:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 21:33 test-1696282433121314619
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh cat /mount-9p/test-1696282433121314619
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-277432 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1c75ebbe-022a-49ce-bfe1-a57411c30767] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1c75ebbe-022a-49ce-bfe1-a57411c30767] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1c75ebbe-022a-49ce-bfe1-a57411c30767] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.028985571s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-277432 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdany-port1329648494/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.54s)
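The 9p mount test shares a host directory into the node and then consumes it from a pod (busybox-mount-test.yaml above); a sketch of the host-side steps, with /tmp/demo-mount as an illustrative path:

# share a host directory into the node at /mount-9p (runs until killed)
out/minikube-linux-arm64 mount -p functional-277432 /tmp/demo-mount:/mount-9p &
# confirm the 9p mount is visible and inspect it from inside the node
out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-277432 ssh -- ls -la /mount-9p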

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service list -o json
functional_test.go:1493: Took "652.444845ms" to run "out/minikube-linux-arm64 -p functional-277432 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30244
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30244
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)
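The remaining service sub-tests only differ in output shape; the forms used above, for reference:

out/minikube-linux-arm64 -p functional-277432 service list
out/minikube-linux-arm64 -p functional-277432 service list -o json
out/minikube-linux-arm64 -p functional-277432 service --namespace=default --https --url hello-node
out/minikube-linux-arm64 -p functional-277432 service hello-node --url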

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-277432 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-277432
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-277432 image ls --format short --alsologtostderr:
I1002 21:34:31.218175 1075647 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:31.218416 1075647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.218428 1075647 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:31.218434 1075647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.218692 1075647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:31.219306 1075647 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.219455 1075647 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.219927 1075647 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:31.241992 1075647 ssh_runner.go:195] Run: systemctl --version
I1002 21:34:31.242042 1075647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:31.265411 1075647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:31.364948 1075647 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
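The four ImageList sub-tests render the same inventory in different formats, each backed by the `sudo crictl images --output json` call visible in the stderr above; for reference:

out/minikube-linux-arm64 -p functional-277432 image ls --format short
out/minikube-linux-arm64 -p functional-277432 image ls --format table
out/minikube-linux-arm64 -p functional-277432 image ls --format json
out/minikube-linux-arm64 -p functional-277432 image ls --format yaml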

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-277432 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| gcr.io/google-containers/addon-resizer  | functional-277432  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/library/nginx                 | alpine             | df8fd1ca35d66 | 45.3MB |
| docker.io/library/nginx                 | latest             | 2a4fbb36e9660 | 196MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-277432 image ls --format table --alsologtostderr:
I1002 21:34:31.816924 1075778 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:31.817085 1075778 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.817091 1075778 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:31.817096 1075778 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.817445 1075778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:31.818193 1075778 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.818350 1075778 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.818936 1075778 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:31.841154 1075778 ssh_runner.go:195] Run: systemctl --version
I1002 21:34:31.841221 1075778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:31.861485 1075778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:31.958958 1075778 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-277432 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.
io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef","docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45331256"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f
5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":
["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"ff
d4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-277432"],"size":"34114467"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196620"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64
816c5c15bf2f002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-277432 image ls --format json --alsologtostderr:
I1002 21:34:31.518196 1075707 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:31.518515 1075707 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.518545 1075707 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:31.518565 1075707 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.518901 1075707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:31.519617 1075707 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.519793 1075707 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.520366 1075707 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:31.548952 1075707 ssh_runner.go:195] Run: systemctl --version
I1002 21:34:31.549003 1075707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:31.573876 1075707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:31.675404 1075707 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-277432 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-277432
size: "34114467"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324
repoTags:
- docker.io/library/nginx:latest
size: "196196620"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
- docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003
repoTags:
- docker.io/library/nginx:alpine
size: "45331256"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-277432 image ls --format yaml --alsologtostderr:
I1002 21:34:31.211896 1075648 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:31.212222 1075648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.212253 1075648 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:31.212274 1075648 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.212559 1075648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:31.213273 1075648 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.213461 1075648 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.214111 1075648 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:31.234905 1075648 ssh_runner.go:195] Run: systemctl --version
I1002 21:34:31.234956 1075648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:31.261901 1075648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:31.360671 1075648 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh pgrep buildkitd: exit status 1 (378.126521ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image build -t localhost/my-image:functional-277432 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 image build -t localhost/my-image:functional-277432 testdata/build --alsologtostderr: (2.379320142s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-277432 image build -t localhost/my-image:functional-277432 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2af9b80177e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-277432
--> 3e10559515a
Successfully tagged localhost/my-image:functional-277432
3e10559515aa4bcaaca73a479715ce08a0f0612264b843f95cc93d407756aa9c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-277432 image build -t localhost/my-image:functional-277432 testdata/build --alsologtostderr:
I1002 21:34:31.881014 1075786 out.go:296] Setting OutFile to fd 1 ...
I1002 21:34:31.881938 1075786 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.881984 1075786 out.go:309] Setting ErrFile to fd 2...
I1002 21:34:31.882005 1075786 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 21:34:31.882342 1075786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
I1002 21:34:31.883548 1075786 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.884371 1075786 config.go:182] Loaded profile config "functional-277432": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1002 21:34:31.884962 1075786 cli_runner.go:164] Run: docker container inspect functional-277432 --format={{.State.Status}}
I1002 21:34:31.910682 1075786 ssh_runner.go:195] Run: systemctl --version
I1002 21:34:31.910746 1075786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-277432
I1002 21:34:31.929467 1075786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/functional-277432/id_rsa Username:docker}
I1002 21:34:32.023881 1075786 build_images.go:151] Building image from path: /tmp/build.836974904.tar
I1002 21:34:32.023941 1075786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 21:34:32.037900 1075786 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.836974904.tar
I1002 21:34:32.042659 1075786 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.836974904.tar: stat -c "%s %y" /var/lib/minikube/build/build.836974904.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.836974904.tar': No such file or directory
I1002 21:34:32.042694 1075786 ssh_runner.go:362] scp /tmp/build.836974904.tar --> /var/lib/minikube/build/build.836974904.tar (3072 bytes)
I1002 21:34:32.074737 1075786 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.836974904
I1002 21:34:32.085709 1075786 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.836974904 -xf /var/lib/minikube/build/build.836974904.tar
I1002 21:34:32.097249 1075786 crio.go:297] Building image: /var/lib/minikube/build/build.836974904
I1002 21:34:32.097347 1075786 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-277432 /var/lib/minikube/build/build.836974904 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1002 21:34:34.156563 1075786 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-277432 /var/lib/minikube/build/build.836974904 --cgroup-manager=cgroupfs: (2.059186102s)
I1002 21:34:34.156674 1075786 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.836974904
I1002 21:34:34.170519 1075786 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.836974904.tar
I1002 21:34:34.181232 1075786 build_images.go:207] Built localhost/my-image:functional-277432 from /tmp/build.836974904.tar
I1002 21:34:34.181261 1075786 build_images.go:123] succeeded building to: functional-277432
I1002 21:34:34.181265 1075786 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.00s)
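Because this profile runs crio there is no buildkitd on the node (hence the failed pgrep above), and the build is executed with podman on the node, as the stderr shows. The user-facing command is just:

# build an image from a local context directly on the node, then confirm the tag is listed
out/minikube-linux-arm64 -p functional-277432 image build -t localhost/my-image:functional-277432 testdata/build
out/minikube-linux-arm64 -p functional-277432 image ls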

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.782210262s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-277432
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr: (4.259183746s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr: (3.63584363s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.96s)
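The load tests push an image from the host's Docker daemon into the cluster runtime; setup plus load, as used in this run:

# tag a local image for the profile, then load it from the host daemon into the node
docker pull gcr.io/google-containers/addon-resizer:1.8.8
docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-277432
out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432
out/minikube-linux-arm64 -p functional-277432 image ls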

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T" /mount1: exit status 1 (725.872049ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-277432 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-277432 /tmp/TestFunctionalparallelMountCmdVerifyCleanup227540808/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)
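
The cleanup test starts three concurrent mounts of the same host directory, probes each with findmnt over SSH, then kills every mount process for the profile in one call; the first probe can exit 1 while the mounts are still coming up, which is likely why the test retries it. A hedged sketch of the same flow, with /tmp/src standing in for the per-test temp directory:

	out/minikube-linux-arm64 mount -p functional-277432 /tmp/src:/mount1 &
	out/minikube-linux-arm64 mount -p functional-277432 /tmp/src:/mount2 &
	out/minikube-linux-arm64 mount -p functional-277432 /tmp/src:/mount3 &
	out/minikube-linux-arm64 -p functional-277432 ssh "findmnt -T /mount1"
	out/minikube-linux-arm64 mount -p functional-277432 --kill=true   # stops all mounts for the profile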

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)
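
All three update-context cases invoke the same command; it rewrites the kubeconfig entry for the profile so the server address matches the cluster's current IP and port, and reports whether anything changed. Manual form, as invoked above:

	out/minikube-linux-arm64 -p functional-277432 update-context --alsologtostderr -v=2
	kubectl config view --minify   # assuming functional-277432 is the current context; the server URL should match the cluster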

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.570687213s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-277432
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 image load --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr: (3.899243119s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.75s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image save gcr.io/google-containers/addon-resizer:functional-277432 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image rm gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-277432 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.024862067s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-277432
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-277432 image save --daemon gcr.io/google-containers/addon-resizer:functional-277432 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-277432
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.96s)
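
Taken together, the four image tests above exercise a full round trip: save the cluster image to a tarball, remove it from the runtime, load it back from the tarball, and finally export it into the host's Docker daemon. A condensed sketch, with a generic tar path in place of the Jenkins workspace path used in the run:

	MK=out/minikube-linux-arm64; P=functional-277432
	IMG=gcr.io/google-containers/addon-resizer:$P
	$MK -p $P image save $IMG /tmp/addon-resizer-save.tar
	$MK -p $P image rm $IMG && $MK -p $P image ls
	$MK -p $P image load /tmp/addon-resizer-save.tar
	$MK -p $P image save --daemon $IMG && docker image inspect $IMG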

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-277432
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-277432
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-277432
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (88.44s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-420597 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1002 21:35:46.833282 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-420597 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m28.439972067s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (88.44s)
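
The legacy-cluster start pins an older Kubernetes release while keeping the same driver and runtime; the interleaved cert_rotation error references the client certificate of the earlier addons-598993 profile and appears unrelated to this start. Invocation as run above:

	out/minikube-linux-arm64 start -p ingress-addon-legacy-420597 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --driver=docker --container-runtime=crio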

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons enable ingress --alsologtostderr -v=5
E1002 21:36:14.531742 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons enable ingress --alsologtostderr -v=5: (12.602979033s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.60s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.74s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-420597 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.74s)

                                                
                                    
x
+
TestJSONOutput/start/Command (76.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-926591 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1002 21:39:48.793777 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-926591 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m16.414100407s)
--- PASS: TestJSONOutput/start/Command (76.41s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.84s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-926591 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-926591 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.78s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-926591 --output=json --user=testUser
E1002 21:40:46.832471 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-926591 --output=json --user=testUser: (5.907317124s)
--- PASS: TestJSONOutput/stop/Command (5.91s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.28s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-029677 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-029677 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.476553ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e2ff4aee-d7da-4988-a129-9409995de798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-029677] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d58af8a-dd86-4fae-bfc3-f72145915ce4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17323"}}
	{"specversion":"1.0","id":"8faa48a3-f98f-48b4-bcaa-8fdfe11c96df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d3636ba-e21a-4f52-ac38-b38345c0f114","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig"}}
	{"specversion":"1.0","id":"00d97698-ac40-4c4f-aa84-f4ee96eb9a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube"}}
	{"specversion":"1.0","id":"e1f669ba-666f-46ed-8204-d023d894dab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"aa238df8-634c-4ab1-a191-2a4406f9b212","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"618c9a91-84af-407c-b8bb-7cbeaa76a5fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-029677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-029677
--- PASS: TestErrorJSONOutput (0.28s)
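
Each line of the --output=json stream above is a standalone CloudEvents-style JSON object, so it can be filtered line by line. A small sketch, assuming jq is available, that extracts the error event behind exit code 56:

	out/minikube-linux-arm64 start -p json-output-error-029677 --memory=2200 \
	  --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# prints: The driver 'fail' is not supported on linux/arm64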

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (42.7s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-161765 --network=
E1002 21:41:10.714525 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:41:19.181583 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.187126 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.197414 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.217700 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.257968 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.338299 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.498679 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:19.819173 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:20.460068 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:21.740305 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:24.300788 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:41:29.421391 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-161765 --network=: (40.584701799s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-161765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-161765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-161765: (2.081500923s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.70s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.08s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-507762 --network=bridge
E1002 21:41:39.662377 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:42:00.143368 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-507762 --network=bridge: (30.108763027s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-507762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-507762
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-507762: (1.945419908s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.08s)

                                                
                                    
x
+
TestKicExistingNetwork (34.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-972878 --network=existing-network
E1002 21:42:41.103591 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-972878 --network=existing-network: (32.156545913s)
helpers_test.go:175: Cleaning up "existing-network-972878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-972878
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-972878: (1.97393473s)
--- PASS: TestKicExistingNetwork (34.29s)
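
This case differs from the custom-network tests in that the Docker network already exists before minikube starts and is expected to be reused rather than recreated. A hedged manual equivalent, assuming the network is created up front as the test name implies:

	docker network create existing-network        # pre-create the network
	out/minikube-linux-arm64 start -p existing-network-972878 --network=existing-network
	docker network ls --format '{{.Name}}'        # existing-network should be listed exactly once
	out/minikube-linux-arm64 delete -p existing-network-972878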

                                                
                                    
x
+
TestKicCustomSubnet (35.27s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-178473 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-178473 --subnet=192.168.60.0/24: (33.080786024s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-178473 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-178473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-178473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-178473: (2.164345446s)
--- PASS: TestKicCustomSubnet (35.27s)
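
The subnet test checks that the Docker network minikube creates for the profile actually carries the requested CIDR. Verification as in the log, with the same profile name:

	out/minikube-linux-arm64 start -p custom-subnet-178473 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-178473 \
	  --format '{{(index .IPAM.Config 0).Subnet}}'   # expected: 192.168.60.0/24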

                                                
                                    
x
+
TestKicStaticIP (33.58s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-943995 --static-ip=192.168.200.200
E1002 21:43:26.868686 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-943995 --static-ip=192.168.200.200: (31.289636492s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-943995 ip
helpers_test.go:175: Cleaning up "static-ip-943995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-943995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-943995: (2.12916504s)
--- PASS: TestKicStaticIP (33.58s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (71.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-462230 --driver=docker  --container-runtime=crio
E1002 21:43:54.554738 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 21:44:03.025403 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-462230 --driver=docker  --container-runtime=crio: (32.691221433s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-465061 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-465061 --driver=docker  --container-runtime=crio: (33.436519353s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-462230
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-465061
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-465061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-465061
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-465061: (2.252097228s)
helpers_test.go:175: Cleaning up "first-462230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-462230
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-462230: (1.955694859s)
--- PASS: TestMinikubeProfile (71.56s)
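
The profile test spins up two independent clusters, switches the active profile between them, and inspects `profile list` output after each switch. A condensed sketch with the same profile names:

	MK=out/minikube-linux-arm64
	$MK start -p first-462230  --driver=docker --container-runtime=crio
	$MK start -p second-465061 --driver=docker --container-runtime=crio
	$MK profile first-462230  && $MK profile list -ojson
	$MK profile second-465061 && $MK profile list -ojson
	$MK delete -p second-465061 && $MK delete -p first-462230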

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.66s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-970317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-970317 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.659965398s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.66s)
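
Unlike TestFunctional's mount tests, MountStart enables the host mount at start time via flags, with --no-kubernetes keeping the node lightweight; the VerifyMountFirst step below then simply lists the mounted host path over SSH. Start command as run above:

	out/minikube-linux-arm64 start -p mount-start-1-970317 --memory=2048 \
	  --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
	  --no-kubernetes --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p mount-start-1-970317 ssh -- ls /minikube-host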

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-970317 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-972057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-972057 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.136427243s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-972057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-970317 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-970317 --alsologtostderr -v=5: (1.697172345s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-972057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-972057
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-972057: (1.229828109s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.76s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-972057
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-972057: (6.756531742s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-972057 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (99.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-629060 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1002 21:45:46.833024 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 21:46:19.181224 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:46:46.866171 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 21:47:09.891958 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-629060 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.043818985s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.61s)
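
The multi-node start creates one control-plane node plus one worker in a single command, and status then reports both. As invoked above:

	out/minikube-linux-arm64 start -p multinode-629060 --wait=true --memory=2200 \
	  --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr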

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-629060 -- rollout status deployment/busybox: (3.758257536s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-rpjdg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-629060 -- exec busybox-5bc68d56bd-wcgsg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.89s)
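
The deployment check applies a busybox manifest spread across the nodes, waits for the rollout, then runs nslookup from each pod to confirm cluster DNS resolves across nodes. A hedged sketch (pod names are generated, so they differ per run):

	out/minikube-linux-arm64 kubectl -p multinode-629060 -- \
	  apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p multinode-629060 -- rollout status deployment/busybox
	POD=$(out/minikube-linux-arm64 kubectl -p multinode-629060 -- \
	  get pods -o jsonpath='{.items[0].metadata.name}')
	out/minikube-linux-arm64 kubectl -p multinode-629060 -- \
	  exec "$POD" -- nslookup kubernetes.default.svc.cluster.local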

                                                
                                    
x
+
TestMultiNode/serial/AddNode (59.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-629060 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-629060 -v 3 --alsologtostderr: (59.17919849s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.91s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (11.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp testdata/cp-test.txt multinode-629060:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1626600241/001/cp-test_multinode-629060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060:/home/docker/cp-test.txt multinode-629060-m02:/home/docker/cp-test_multinode-629060_multinode-629060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test_multinode-629060_multinode-629060-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060:/home/docker/cp-test.txt multinode-629060-m03:/home/docker/cp-test_multinode-629060_multinode-629060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test_multinode-629060_multinode-629060-m03.txt"
E1002 21:48:26.868248 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp testdata/cp-test.txt multinode-629060-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1626600241/001/cp-test_multinode-629060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m02:/home/docker/cp-test.txt multinode-629060:/home/docker/cp-test_multinode-629060-m02_multinode-629060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test_multinode-629060-m02_multinode-629060.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m02:/home/docker/cp-test.txt multinode-629060-m03:/home/docker/cp-test_multinode-629060-m02_multinode-629060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test_multinode-629060-m02_multinode-629060-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp testdata/cp-test.txt multinode-629060-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1626600241/001/cp-test_multinode-629060-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m03:/home/docker/cp-test.txt multinode-629060:/home/docker/cp-test_multinode-629060-m03_multinode-629060.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060 "sudo cat /home/docker/cp-test_multinode-629060-m03_multinode-629060.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 cp multinode-629060-m03:/home/docker/cp-test.txt multinode-629060-m02:/home/docker/cp-test_multinode-629060-m03_multinode-629060-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 ssh -n multinode-629060-m02 "sudo cat /home/docker/cp-test_multinode-629060-m03_multinode-629060-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.01s)
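
The copy test moves a file from the host into each node, back out to the host, and directly between nodes, verifying contents over SSH after every hop. A trimmed sketch showing one hop of each kind, with /tmp standing in for the per-test temp directory:

	MK=out/minikube-linux-arm64; P=multinode-629060
	$MK -p $P cp testdata/cp-test.txt $P:/home/docker/cp-test.txt                # host -> node
	$MK -p $P cp $P:/home/docker/cp-test.txt /tmp/cp-test_$P.txt                 # node -> host
	$MK -p $P cp $P:/home/docker/cp-test.txt $P-m02:/home/docker/cp-test.txt     # node -> node
	$MK -p $P ssh -n $P-m02 "sudo cat /home/docker/cp-test.txt"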

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-629060 node stop m03: (1.22959798s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-629060 status: exit status 7 (565.062661ms)

                                                
                                                
-- stdout --
	multinode-629060
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629060-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629060-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr: exit status 7 (559.373758ms)

                                                
                                                
-- stdout --
	multinode-629060
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629060-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629060-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:48:35.623306 1122309 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:48:35.623491 1122309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:35.623501 1122309 out.go:309] Setting ErrFile to fd 2...
	I1002 21:48:35.623507 1122309 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:48:35.623764 1122309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:48:35.623942 1122309 out.go:303] Setting JSON to false
	I1002 21:48:35.624015 1122309 mustload.go:65] Loading cluster: multinode-629060
	I1002 21:48:35.624124 1122309 notify.go:220] Checking for updates...
	I1002 21:48:35.624428 1122309 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:48:35.624439 1122309 status.go:255] checking status of multinode-629060 ...
	I1002 21:48:35.625720 1122309 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:48:35.646609 1122309 status.go:330] multinode-629060 host status = "Running" (err=<nil>)
	I1002 21:48:35.646633 1122309 host.go:66] Checking if "multinode-629060" exists ...
	I1002 21:48:35.646965 1122309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060
	I1002 21:48:35.668786 1122309 host.go:66] Checking if "multinode-629060" exists ...
	I1002 21:48:35.669128 1122309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:48:35.669183 1122309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060
	I1002 21:48:35.693378 1122309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33810 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060/id_rsa Username:docker}
	I1002 21:48:35.792078 1122309 ssh_runner.go:195] Run: systemctl --version
	I1002 21:48:35.797715 1122309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:48:35.811510 1122309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:48:35.884143 1122309 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-02 21:48:35.873965775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 21:48:35.884758 1122309 kubeconfig.go:92] found "multinode-629060" server: "https://192.168.58.2:8443"
	I1002 21:48:35.884795 1122309 api_server.go:166] Checking apiserver status ...
	I1002 21:48:35.884843 1122309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:48:35.897963 1122309 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1270/cgroup
	I1002 21:48:35.909515 1122309 api_server.go:182] apiserver freezer: "8:freezer:/docker/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/crio/crio-6a91dc91514ef1c56dc60de2ecfc5b23d0feda183302e5ec965af2d512c960c0"
	I1002 21:48:35.909583 1122309 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a49cd6d49abe64c6c2fef94211467ca6fd68de0ad097cc27b1a8202b7d0f8e33/crio/crio-6a91dc91514ef1c56dc60de2ecfc5b23d0feda183302e5ec965af2d512c960c0/freezer.state
	I1002 21:48:35.920122 1122309 api_server.go:204] freezer state: "THAWED"
	I1002 21:48:35.920157 1122309 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1002 21:48:35.930665 1122309 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1002 21:48:35.930693 1122309 status.go:421] multinode-629060 apiserver status = Running (err=<nil>)
	I1002 21:48:35.930704 1122309 status.go:257] multinode-629060 status: &{Name:multinode-629060 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:48:35.930721 1122309 status.go:255] checking status of multinode-629060-m02 ...
	I1002 21:48:35.931042 1122309 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Status}}
	I1002 21:48:35.948831 1122309 status.go:330] multinode-629060-m02 host status = "Running" (err=<nil>)
	I1002 21:48:35.948855 1122309 host.go:66] Checking if "multinode-629060-m02" exists ...
	I1002 21:48:35.949147 1122309 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-629060-m02
	I1002 21:48:35.967196 1122309 host.go:66] Checking if "multinode-629060-m02" exists ...
	I1002 21:48:35.967505 1122309 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:48:35.967558 1122309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-629060-m02
	I1002 21:48:35.985553 1122309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33815 SSHKeyPath:/home/jenkins/minikube-integration/17323-1042317/.minikube/machines/multinode-629060-m02/id_rsa Username:docker}
	I1002 21:48:36.088220 1122309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:48:36.102542 1122309 status.go:257] multinode-629060-m02 status: &{Name:multinode-629060-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:48:36.102578 1122309 status.go:255] checking status of multinode-629060-m03 ...
	I1002 21:48:36.102888 1122309 cli_runner.go:164] Run: docker container inspect multinode-629060-m03 --format={{.State.Status}}
	I1002 21:48:36.121521 1122309 status.go:330] multinode-629060-m03 host status = "Stopped" (err=<nil>)
	I1002 21:48:36.121546 1122309 status.go:343] host is not running, skipping remaining checks
	I1002 21:48:36.121555 1122309 status.go:257] multinode-629060-m03 status: &{Name:multinode-629060-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.36s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-629060 node start m03 --alsologtostderr: (12.347743017s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.19s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (124.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-629060
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-629060
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-629060: (25.057115779s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-629060 --wait=true -v=8 --alsologtostderr
E1002 21:50:46.832338 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-629060 --wait=true -v=8 --alsologtostderr: (1m39.37100047s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-629060
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.57s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-629060 node delete m03: (4.357766075s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.09s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 stop
E1002 21:51:19.180282 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-629060 stop: (23.887978157s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-629060 status: exit status 7 (90.265224ms)

                                                
                                                
-- stdout --
	multinode-629060
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629060-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr: exit status 7 (90.698083ms)

                                                
                                                
-- stdout --
	multinode-629060
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629060-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:51:23.003118 1130402 out.go:296] Setting OutFile to fd 1 ...
	I1002 21:51:23.003337 1130402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:23.003346 1130402 out.go:309] Setting ErrFile to fd 2...
	I1002 21:51:23.003352 1130402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 21:51:23.003663 1130402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 21:51:23.003874 1130402 out.go:303] Setting JSON to false
	I1002 21:51:23.003996 1130402 mustload.go:65] Loading cluster: multinode-629060
	I1002 21:51:23.004468 1130402 config.go:182] Loaded profile config "multinode-629060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 21:51:23.004493 1130402 status.go:255] checking status of multinode-629060 ...
	I1002 21:51:23.005016 1130402 cli_runner.go:164] Run: docker container inspect multinode-629060 --format={{.State.Status}}
	I1002 21:51:23.005769 1130402 notify.go:220] Checking for updates...
	I1002 21:51:23.026772 1130402 status.go:330] multinode-629060 host status = "Stopped" (err=<nil>)
	I1002 21:51:23.026797 1130402 status.go:343] host is not running, skipping remaining checks
	I1002 21:51:23.026805 1130402 status.go:257] multinode-629060 status: &{Name:multinode-629060 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 21:51:23.026920 1130402 status.go:255] checking status of multinode-629060-m02 ...
	I1002 21:51:23.027235 1130402 cli_runner.go:164] Run: docker container inspect multinode-629060-m02 --format={{.State.Status}}
	I1002 21:51:23.045397 1130402 status.go:330] multinode-629060-m02 host status = "Stopped" (err=<nil>)
	I1002 21:51:23.045422 1130402 status.go:343] host is not running, skipping remaining checks
	I1002 21:51:23.045429 1130402 status.go:257] multinode-629060-m02 status: &{Name:multinode-629060-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-629060 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-629060 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.82652311s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-629060 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.61s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-629060
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-629060-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-629060-m02 --driver=docker  --container-runtime=crio: exit status 14 (75.481483ms)

                                                
                                                
-- stdout --
	* [multinode-629060-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-629060-m02' is duplicated with machine name 'multinode-629060-m02' in profile 'multinode-629060'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-629060-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-629060-m03 --driver=docker  --container-runtime=crio: (30.55104041s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-629060
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-629060: exit status 80 (377.952116ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-629060
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-629060-m03 already exists in multinode-629060-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-629060-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-629060-m03: (2.021406627s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.08s)

                                                
                                    
TestPreload (147.77s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-673079 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1002 21:53:26.869148 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-673079 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.343578963s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-673079 image pull gcr.io/k8s-minikube/busybox
E1002 21:54:49.915464 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-673079 image pull gcr.io/k8s-minikube/busybox: (2.129728966s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-673079
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-673079: (5.836767099s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-673079 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1002 21:55:46.833053 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-673079 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.801685804s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-673079 image list
helpers_test.go:175: Cleaning up "test-preload-673079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-673079
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-673079: (2.400724645s)
--- PASS: TestPreload (147.77s)

                                                
                                    
TestScheduledStopUnix (111.04s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-908756 --memory=2048 --driver=docker  --container-runtime=crio
E1002 21:56:19.181163 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-908756 --memory=2048 --driver=docker  --container-runtime=crio: (34.865706412s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-908756 -n scheduled-stop-908756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908756 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908756 -n scheduled-stop-908756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-908756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-908756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-908756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-908756: exit status 7 (67.896381ms)

                                                
                                                
-- stdout --
	scheduled-stop-908756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908756 -n scheduled-stop-908756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-908756 -n scheduled-stop-908756: exit status 7 (67.570229ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-908756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-908756
E1002 21:57:42.226486 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-908756: (4.489372597s)
--- PASS: TestScheduledStopUnix (111.04s)

                                                
                                    
TestInsufficientStorage (13.29s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-768004 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-768004 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.715735128s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9d1daa4e-3d3a-452f-8a37-89eb075b6749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-768004] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eaad322d-98e5-4956-9516-05d7318ca54b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17323"}}
	{"specversion":"1.0","id":"f752e42c-d1d8-4253-b3bb-0c1d16131184","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"592e5177-8786-40c4-97de-32388a88f696","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig"}}
	{"specversion":"1.0","id":"2db112e9-6609-4d5e-8c0a-b8886686df47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube"}}
	{"specversion":"1.0","id":"8295fe86-474b-4802-88ee-4ef390a059f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f99a428f-900b-4179-9a6c-2bc54cb1cd55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a01a7b90-536d-44f0-9084-c02c6895fbd3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"03e49f69-17da-488f-bda1-ced0220013a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"074ba2a9-4dba-442a-b8f4-36c146045ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"db826889-8761-478f-b6d2-21f6a8fea98d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"541031b5-e00c-4e2b-9fdf-91106732f08e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-768004 in cluster insufficient-storage-768004","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"94a1c6bc-a6fd-49eb-9ae6-60534dffded2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e93b352-64f5-4b19-9032-4bfb642c2400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f7e1629-e2e6-4b67-a33a-7ef2a3d23c08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-768004 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-768004 --output=json --layout=cluster: exit status 7 (320.168397ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-768004","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-768004","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:57:55.026617 1147342 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-768004" does not appear in /home/jenkins/minikube-integration/17323-1042317/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-768004 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-768004 --output=json --layout=cluster: exit status 7 (307.206213ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-768004","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-768004","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:57:55.332852 1147395 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-768004" does not appear in /home/jenkins/minikube-integration/17323-1042317/kubeconfig
	E1002 21:57:55.345465 1147395 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/insufficient-storage-768004/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-768004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-768004
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-768004: (1.950194139s)
--- PASS: TestInsufficientStorage (13.29s)

                                                
                                    
TestKubernetesUpgrade (378.23s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 22:00:46.833173 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m0.395116615s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-573624
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-573624: (1.333469993s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-573624 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-573624 status --format={{.Host}}: exit status 7 (70.138747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1002 22:01:19.180509 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:03:26.868384 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 22:03:49.892160 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m45.07057525s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-573624 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (106.228794ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-573624] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-573624
	    minikube start -p kubernetes-upgrade-573624 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5736242 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-573624 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-573624 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.067490844s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-573624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-573624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-573624: (3.06411401s)
--- PASS: TestKubernetesUpgrade (378.23s)

                                                
                                    
TestPause/serial/Start (87.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-050274 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-050274 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m27.420683334s)
--- PASS: TestPause/serial/Start (87.42s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.12s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-283217
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (86.884402ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-718113] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-718113 --driver=docker  --container-runtime=crio
E1002 22:08:26.868772 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-718113 --driver=docker  --container-runtime=crio: (33.930074649s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-718113 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.35s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --driver=docker  --container-runtime=crio: (4.664397168s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-718113 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-718113 status -o json: exit status 2 (343.47743ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-718113","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-718113
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-718113: (1.984302744s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.99s)

                                                
                                    
TestNoKubernetes/serial/Start (8.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-718113 --no-kubernetes --driver=docker  --container-runtime=crio: (8.873219244s)
--- PASS: TestNoKubernetes/serial/Start (8.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-718113 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-718113 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.123079ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-718113
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-718113: (1.248945301s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-718113 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-718113 --driver=docker  --container-runtime=crio: (6.938946145s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.94s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-718113 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-718113 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.651919ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestNetworkPlugins/group/false (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-820473 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-820473 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (229.776626ms)

                                                
                                                
-- stdout --
	* [false-820473] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17323
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 22:09:51.327444 1193759 out.go:296] Setting OutFile to fd 1 ...
	I1002 22:09:51.327704 1193759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:09:51.327754 1193759 out.go:309] Setting ErrFile to fd 2...
	I1002 22:09:51.327781 1193759 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 22:09:51.328105 1193759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17323-1042317/.minikube/bin
	I1002 22:09:51.328545 1193759 out.go:303] Setting JSON to false
	I1002 22:09:51.329618 1193759 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17539,"bootTime":1696267053,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 22:09:51.329720 1193759 start.go:138] virtualization:  
	I1002 22:09:51.332465 1193759 out.go:177] * [false-820473] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1002 22:09:51.334649 1193759 out.go:177]   - MINIKUBE_LOCATION=17323
	I1002 22:09:51.336499 1193759 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 22:09:51.334806 1193759 notify.go:220] Checking for updates...
	I1002 22:09:51.340016 1193759 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17323-1042317/kubeconfig
	I1002 22:09:51.341574 1193759 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17323-1042317/.minikube
	I1002 22:09:51.343394 1193759 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 22:09:51.345283 1193759 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 22:09:51.347642 1193759 config.go:182] Loaded profile config "cert-expiration-474773": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1002 22:09:51.347807 1193759 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 22:09:51.383563 1193759 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I1002 22:09:51.383660 1193759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 22:09:51.489907 1193759 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-02 22:09:51.476796582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1045-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1002 22:09:51.490019 1193759 docker.go:294] overlay module found
	I1002 22:09:51.492260 1193759 out.go:177] * Using the docker driver based on user configuration
	I1002 22:09:51.494715 1193759 start.go:298] selected driver: docker
	I1002 22:09:51.494739 1193759 start.go:902] validating driver "docker" against <nil>
	I1002 22:09:51.494753 1193759 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 22:09:51.497171 1193759 out.go:177] 
	W1002 22:09:51.499693 1193759 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1002 22:09:51.502211 1193759 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-820473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-474773
contexts:
- context:
    cluster: cert-expiration-474773
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-474773
  name: cert-expiration-474773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474773
  user:
    client-certificate: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.crt
    client-key: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-820473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-820473"

                                                
                                                
----------------------- debugLogs end: false-820473 [took: 3.319692005s] --------------------------------
helpers_test.go:175: Cleaning up "false-820473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-820473
--- PASS: TestNetworkPlugins/group/false (3.71s)
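
Note on the debugLogs above: the two error shapes come from different tools. kubectl prints "Error in configuration: context was not found ..." because the kubeconfig captured in the ">>> k8s: kubectl config:" section only contains the cert-expiration-474773 entry (and current-context is empty), while minikube prints "Profile \"false-820473\" not found" because no such profile exists on the host. A minimal way to check both by hand, assuming the same kubeconfig and minikube home as this run (illustrative commands, not part of the test output):

	kubectl config get-contexts
	minikube profile list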

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (113.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-456332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1002 22:10:46.833166 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-456332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m53.989057199s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (113.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (62.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-957566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-957566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m2.286424466s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-456332 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [743eccf3-08d9-4daa-984b-4ac1f6e9ec49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [743eccf3-08d9-4daa-984b-4ac1f6e9ec49] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.04171488s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-456332 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.65s)
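
The DeployApp step is driven by the Go test harness, but its wait-and-exec portion can be reproduced with plain kubectl; a minimal sketch, assuming the old-k8s-version-456332 context still exists and the busybox pod from testdata/busybox.yaml has been applied (illustrative commands, not part of the test output):

	kubectl --context old-k8s-version-456332 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-456332 exec busybox -- /bin/sh -c "ulimit -n"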

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-456332 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-456332 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-456332 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-456332 --alsologtostderr -v=3: (12.219180513s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-957566 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f7e4c969-cf23-4945-8f0e-8d1bcb45bd21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f7e4c969-cf23-4945-8f0e-8d1bcb45bd21] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.026958174s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-957566 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-957566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-957566 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.412892059s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-957566 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-456332 -n old-k8s-version-456332
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-456332 -n old-k8s-version-456332: exit status 7 (81.246238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-456332 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
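
The "(may be ok)" annotation reflects how minikube status reports state: its help text describes the exit code as a bitmask over host, cluster and Kubernetes status, so exit status 7 right after a stop is consistent with everything being down rather than with a command failure. A quick way to observe the same thing by hand, assuming the profile from this run (illustrative, not part of the test output):

	out/minikube-linux-arm64 status -p old-k8s-version-456332; echo "exit=$?"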

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (443.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-456332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-456332 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m22.594339894s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-456332 -n old-k8s-version-456332
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (443.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-957566 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-957566 --alsologtostderr -v=3: (12.334504418s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-957566 -n no-preload-957566
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-957566 -n no-preload-957566: exit status 7 (97.856537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-957566 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (631.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-957566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:13:26.868175 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 22:14:22.226981 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:15:46.833224 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 22:16:19.180806 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:18:26.868396 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-957566 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (10m30.766760465s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-957566 -n no-preload-957566
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (631.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gqldq" [8cb305fa-1fa9-4143-97f0-708740e55d06] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027404101s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gqldq" [8cb305fa-1fa9-4143-97f0-708740e55d06] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010386675s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-456332 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-456332 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-456332 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-456332 -n old-k8s-version-456332
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-456332 -n old-k8s-version-456332: exit status 2 (390.138435ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-456332 -n old-k8s-version-456332
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-456332 -n old-k8s-version-456332: exit status 2 (367.207598ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-456332 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-456332 -n old-k8s-version-456332
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-456332 -n old-k8s-version-456332
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (51.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-009533 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:20:46.832522 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 22:21:19.180539 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-009533 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (51.577962859s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.58s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-009533 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [688d3f15-86d7-41c7-bb83-b611e3c12342] Pending
helpers_test.go:344: "busybox" [688d3f15-86d7-41c7-bb83-b611e3c12342] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [688d3f15-86d7-41c7-bb83-b611e3c12342] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.034072513s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-009533 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-009533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-009533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102403259s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-009533 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-009533 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-009533 --alsologtostderr -v=3: (12.230579947s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-009533 -n embed-certs-009533
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-009533 -n embed-certs-009533: exit status 7 (90.17354ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-009533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (354.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-009533 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:22:26.852391 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:26.858054 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:26.868422 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:26.888667 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:26.929047 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:27.009369 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:27.169738 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:27.490269 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:28.131026 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:29.411496 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:31.971903 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:37.092979 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:22:47.333865 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:23:07.814409 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:23:26.868435 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-009533 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m54.281471785s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-009533 -n embed-certs-009533
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (354.83s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-btx4f" [39f971a0-b433-410a-9994-4c2ca639814c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.028843858s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-btx4f" [39f971a0-b433-410a-9994-4c2ca639814c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011049312s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-957566 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-957566 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-957566 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-957566 -n no-preload-957566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-957566 -n no-preload-957566: exit status 2 (391.70357ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-957566 -n no-preload-957566
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-957566 -n no-preload-957566: exit status 2 (362.403201ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-957566 --alsologtostderr -v=1
E1002 22:23:48.774611 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-957566 -n no-preload-957566
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-957566 -n no-preload-957566
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-998594 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:25:10.695137 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-998594 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m22.023216275s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-998594 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [987549d8-3551-4129-a70c-7301a8eb7a99] Pending
helpers_test.go:344: "busybox" [987549d8-3551-4129-a70c-7301a8eb7a99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [987549d8-3551-4129-a70c-7301a8eb7a99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.032076128s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-998594 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-998594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-998594 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.161301321s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-998594 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-998594 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-998594 --alsologtostderr -v=3: (12.117340582s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594: exit status 7 (80.147264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-998594 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-998594 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:25:46.832783 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
E1002 22:26:19.180894 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:27:26.852408 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-998594 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m44.511136833s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (345.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dhhbd" [65f12ef8-d799-473d-a2b5-60ed9eb5e748] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1002 22:27:41.107581 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.112806 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.123004 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.143294 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.184017 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.264490 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.425174 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:41.746018 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:42.386642 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:43.667777 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dhhbd" [65f12ef8-d799-473d-a2b5-60ed9eb5e748] Running
E1002 22:27:46.228199 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.032867192s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dhhbd" [65f12ef8-d799-473d-a2b5-60ed9eb5e748] Running
E1002 22:27:51.349117 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:27:54.535530 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010948508s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-009533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-009533 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)
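
For reference, the image audit above boils down to listing CRI images over SSH and flagging anything outside the expected minikube set. Below is a minimal Go sketch of that step; the JSON field names (images, repoTags) reflect crictl's usual output shape and, like the hard-coded profile name, are assumptions for illustration rather than the test's actual helper code.

-- illustrative sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the assumed shape of `crictl images -o json` output.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Same command the test runs, issued as a one-off CLI call.
	out, err := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "embed-certs-009533",
		"sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println("found image:", tag)
		}
	}
}
-- /illustrative sketch --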

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-009533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-009533 -n embed-certs-009533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-009533 -n embed-certs-009533: exit status 2 (348.529198ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-009533 -n embed-certs-009533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-009533 -n embed-certs-009533: exit status 2 (363.547027ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-009533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-009533 -n embed-certs-009533
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-009533 -n embed-certs-009533
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.45s)
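
The pause/unpause sequence above relies on exit-code semantics: `minikube status` exits 2 while a component is paused or stopped (hence the "may be ok" notes, with APIServer reporting Paused and Kubelet reporting Stopped) and 0 once everything is back up after unpause. A minimal Go sketch of reading that exit code, assuming the same binary path and profile name shown in the log:

-- illustrative sketch --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query the API server state the same way the test does.
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "embed-certs-009533", "-n", "embed-certs-009533")
	out, err := cmd.Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // 2 while paused/stopped; 7 when the host itself is stopped
	} else if err != nil {
		panic(err)
	}
	fmt.Printf("APIServer=%s exit=%d\n", strings.TrimSpace(string(out)), code)
}
-- /illustrative sketch --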

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-157822 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:28:09.916388 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
E1002 22:28:22.069814 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
E1002 22:28:26.869067 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/functional-277432/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-157822 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (42.785956645s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-157822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-157822 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.213776544s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-157822 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-157822 --alsologtostderr -v=3: (1.278568704s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-157822 -n newest-cni-157822
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-157822 -n newest-cni-157822: exit status 7 (67.73691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-157822 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-157822 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1002 22:29:03.030590 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-157822 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (30.658302769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-157822 -n newest-cni-157822
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-157822 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-157822 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-157822 -n newest-cni-157822
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-157822 -n newest-cni-157822: exit status 2 (392.942727ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-157822 -n newest-cni-157822
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-157822 -n newest-cni-157822: exit status 2 (370.046731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-157822 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-157822 -n newest-cni-157822
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-157822 -n newest-cni-157822
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1002 22:30:24.950848 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m18.383783662s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.39s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s47ht" [a606be33-9949-42b7-985c-512289de7042] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:30:46.833199 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/addons-598993/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s47ht" [a606be33-9949-42b7-985c-512289de7042] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.012322647s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.46s)
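
Each NetCatPod step follows the same pattern: apply testdata/netcat-deployment.yaml, then poll until the pods matching app=netcat report Running. A minimal client-go sketch of that polling loop follows; it is an illustration only, not the actual helpers_test.go implementation, and the kubeconfig path and poll interval are assumptions.

-- illustrative sketch --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until every pod matching the selector is in phase Running.
func waitForRunningPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunningPods(context.Background(), cs, "default", "app=netcat", 15*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("app=netcat pods are Running")
}
-- /illustrative sketch --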

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (88.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m28.991237298s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.99s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wv7b4" [eff1128e-e264-4433-aaf7-be84439e3b96] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wv7b4" [eff1128e-e264-4433-aaf7-be84439e3b96] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.057078932s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wv7b4" [eff1128e-e264-4433-aaf7-be84439e3b96] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01237074s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-998594 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-998594 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-998594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-998594 --alsologtostderr -v=1: (1.11484961s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594: exit status 2 (423.850312ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594: exit status 2 (460.220309ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-998594 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-998594 --alsologtostderr -v=1: (1.136023544s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-998594 -n default-k8s-diff-port-998594
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.52s)
E1002 22:37:26.851687 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/calico/Start (73.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1002 22:32:26.851710 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/old-k8s-version-456332/client.crt: no such file or directory
E1002 22:32:41.107455 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/no-preload-957566/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.19733121s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nxfbb" [f657dbc8-bf37-4213-8816-96f617dbc405] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.048988836s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-szqtg" [9f53fb38-7ba1-4476-a598-d1feffe6695b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-szqtg" [9f53fb38-7ba1-4476-a598-d1feffe6695b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.014766732s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.34s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bk8sp" [3ce891a6-948f-42c9-b127-db132422484e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037570291s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jbnqw" [63acd46d-7c2a-4a9a-b0a8-3f82340409d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jbnqw" [63acd46d-7c2a-4a9a-b0a8-3f82340409d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.017858769s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (72.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.685670694s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.69s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (94.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m34.799576147s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (94.80s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wv8p2" [32b53a83-1f89-43b7-a0a7-21df92a895cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wv8p2" [32b53a83-1f89-43b7-a0a7-21df92a895cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.01126098s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1002 22:35:20.527067 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/default-k8s-diff-port-998594/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m13.710240193s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.71s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m887z" [97b60dae-3a25-4e98-a5f0-ea98e61e267b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 22:35:25.647833 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/default-k8s-diff-port-998594/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-m887z" [97b60dae-3a25-4e98-a5f0-ea98e61e267b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012472406s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.54s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1002 22:36:04.621286 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/auto-820473/client.crt: no such file or directory
E1002 22:36:19.180384 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/ingress-addon-legacy-420597/client.crt: no such file or directory
E1002 22:36:25.102393 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/auto-820473/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-820473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.426752093s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ct5v6" [643b9bf7-31d8-4df6-a895-c89128aa6f52] Running
E1002 22:36:37.329825 1047732 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/default-k8s-diff-port-998594/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028353171s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-724bw" [780da4e3-dac2-49e0-81c3-f8d0a6ef07fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-724bw" [780da4e3-dac2-49e0-81c3-f8d0a6ef07fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.012326596s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-820473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-820473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-q8kzr" [3a3bbe43-b7ba-449f-a5af-3363e672119e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-q8kzr" [3a3bbe43-b7ba-449f-a5af-3363e672119e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010163107s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-820473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-820473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    

Test skip (29/299)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.62s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-380768 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-380768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-380768
--- SKIP: TestDownloadOnlyKic (0.62s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:422: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-482388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-482388
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-820473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-474773
contexts:
- context:
    cluster: cert-expiration-474773
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-474773
  name: cert-expiration-474773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474773
  user:
    client-certificate: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.crt
    client-key: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-820473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-820473"

                                                
                                                
----------------------- debugLogs end: kubenet-820473 [took: 3.381898675s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-820473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-820473
--- SKIP: TestNetworkPlugins/group/kubenet (3.56s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-820473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-820473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17323-1042317/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-474773
contexts:
- context:
    cluster: cert-expiration-474773
    extensions:
    - extension:
        last-update: Mon, 02 Oct 2023 22:08:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-474773
  name: cert-expiration-474773
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-474773
  user:
    client-certificate: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.crt
    client-key: /home/jenkins/minikube-integration/17323-1042317/.minikube/profiles/cert-expiration-474773/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-820473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-820473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-820473"

                                                
                                                
----------------------- debugLogs end: cilium-820473 [took: 3.798431677s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-820473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-820473
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)

                                                
                                    